Feb 13 19:27:49.872642 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:27:49.872667 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:27:49.872680 kernel: BIOS-provided physical RAM map:
Feb 13 19:27:49.872688 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:27:49.872696 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:27:49.872704 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:27:49.872713 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:27:49.872721 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:27:49.872729 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:27:49.872739 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:27:49.872746 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:27:49.872754 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:27:49.872762 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:27:49.872770 kernel: NX (Execute Disable) protection: active
Feb 13 19:27:49.872780 kernel: APIC: Static calls initialized
Feb 13 19:27:49.872798 kernel: SMBIOS 2.8 present.
Feb 13 19:27:49.872807 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:27:49.872815 kernel: Hypervisor detected: KVM
Feb 13 19:27:49.872824 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:27:49.872832 kernel: kvm-clock: using sched offset of 2249750292 cycles
Feb 13 19:27:49.872841 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:27:49.872850 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:27:49.872859 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:27:49.872868 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:27:49.872877 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:27:49.872888 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:27:49.872897 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:27:49.872906 kernel: Using GB pages for direct mapping
Feb 13 19:27:49.872915 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:27:49.872923 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:27:49.872932 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:27:49.872941 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:27:49.872950 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:27:49.872958 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:27:49.872969 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:27:49.872978 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:27:49.872987 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:27:49.872995 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:27:49.873004 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:27:49.873013 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:27:49.873026 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:27:49.873037 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:27:49.873046 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:27:49.873055 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:27:49.873064 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:27:49.873073 kernel: No NUMA configuration found
Feb 13 19:27:49.873082 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:27:49.873091 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:27:49.873102 kernel: Zone ranges:
Feb 13 19:27:49.873111 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:27:49.873120 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:27:49.873129 kernel: Normal empty
Feb 13 19:27:49.873138 kernel: Movable zone start for each node
Feb 13 19:27:49.873147 kernel: Early memory node ranges
Feb 13 19:27:49.873156 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:27:49.873165 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:27:49.873174 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:27:49.873185 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:27:49.873194 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:27:49.873203 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:27:49.873212 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:27:49.873221 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:27:49.873230 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:27:49.873239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:27:49.873248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:27:49.873257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:27:49.873269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:27:49.873278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:27:49.873287 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:27:49.873296 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:27:49.873305 kernel: TSC deadline timer available
Feb 13 19:27:49.873314 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:27:49.873323 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:27:49.873332 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:27:49.873341 kernel: kvm-guest: setup PV sched yield
Feb 13 19:27:49.873350 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:27:49.873361 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:27:49.873370 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:27:49.873379 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:27:49.873389 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:27:49.873416 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:27:49.873425 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:27:49.873434 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:27:49.873443 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:27:49.873453 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:27:49.873466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:27:49.873475 kernel: random: crng init done
Feb 13 19:27:49.873484 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:27:49.873493 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:27:49.873502 kernel: Fallback order for Node 0: 0
Feb 13 19:27:49.873511 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:27:49.873520 kernel: Policy zone: DMA32
Feb 13 19:27:49.873529 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:27:49.873541 kernel: Memory: 2432540K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 138952K reserved, 0K cma-reserved)
Feb 13 19:27:49.873550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:27:49.873559 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:27:49.873568 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:27:49.873577 kernel: Dynamic Preempt: voluntary
Feb 13 19:27:49.873586 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:27:49.873596 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:27:49.873605 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:27:49.873614 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:27:49.873625 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:27:49.873635 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:27:49.873644 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:27:49.873653 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:27:49.873662 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:27:49.873671 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:27:49.873680 kernel: Console: colour VGA+ 80x25
Feb 13 19:27:49.873689 kernel: printk: console [ttyS0] enabled
Feb 13 19:27:49.873698 kernel: ACPI: Core revision 20230628
Feb 13 19:27:49.873709 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:27:49.873718 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:27:49.873727 kernel: x2apic enabled
Feb 13 19:27:49.873736 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:27:49.873745 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:27:49.873755 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:27:49.873764 kernel: kvm-guest: setup PV IPIs
Feb 13 19:27:49.873782 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:27:49.873799 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:27:49.873808 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:27:49.873818 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:27:49.873827 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:27:49.873839 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:27:49.873848 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:27:49.873858 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:27:49.873867 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:27:49.873877 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:27:49.873888 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:27:49.873898 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:27:49.873908 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:27:49.873917 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:27:49.873927 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:27:49.873937 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:27:49.873947 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:27:49.873956 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:27:49.873968 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:27:49.873977 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:27:49.873987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:27:49.873996 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:27:49.874006 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:27:49.874015 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:27:49.874024 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:27:49.874034 kernel: landlock: Up and running.
Feb 13 19:27:49.874043 kernel: SELinux: Initializing.
Feb 13 19:27:49.874055 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:27:49.874064 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:27:49.874074 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:27:49.874083 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:27:49.874093 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:27:49.874103 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:27:49.874112 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:27:49.874121 kernel: ... version: 0
Feb 13 19:27:49.874131 kernel: ... bit width: 48
Feb 13 19:27:49.874142 kernel: ... generic registers: 6
Feb 13 19:27:49.874152 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:27:49.874161 kernel: ... max period: 00007fffffffffff
Feb 13 19:27:49.874171 kernel: ... fixed-purpose events: 0
Feb 13 19:27:49.874180 kernel: ... event mask: 000000000000003f
Feb 13 19:27:49.874189 kernel: signal: max sigframe size: 1776
Feb 13 19:27:49.874198 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:27:49.874208 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:27:49.874217 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:27:49.874229 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:27:49.874238 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:27:49.874248 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:27:49.874257 kernel: smpboot: Max logical packages: 1
Feb 13 19:27:49.874267 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:27:49.874276 kernel: devtmpfs: initialized
Feb 13 19:27:49.874285 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:27:49.874295 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:27:49.874304 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:27:49.874316 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:27:49.874325 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:27:49.874334 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:27:49.874344 kernel: audit: type=2000 audit(1739474869.997:1): state=initialized audit_enabled=0 res=1
Feb 13 19:27:49.874353 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:27:49.874363 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:27:49.874372 kernel: cpuidle: using governor menu
Feb 13 19:27:49.874381 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:27:49.874391 kernel: dca service started, version 1.12.1
Feb 13 19:27:49.874431 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:27:49.874449 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:27:49.874458 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:27:49.874468 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:27:49.874477 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:27:49.874487 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:27:49.874496 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:27:49.874506 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:27:49.874515 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:27:49.874527 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:27:49.874536 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:27:49.874546 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:27:49.874555 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:27:49.874564 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:27:49.874574 kernel: ACPI: Interpreter enabled
Feb 13 19:27:49.874583 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:27:49.874593 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:27:49.874602 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:27:49.874614 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:27:49.874623 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:27:49.874633 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:27:49.874839 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:27:49.874983 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:27:49.875123 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:27:49.875135 kernel: PCI host bridge to bus 0000:00
Feb 13 19:27:49.875275 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:27:49.875414 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:27:49.875543 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:27:49.875666 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:27:49.875816 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:27:49.875958 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:27:49.876074 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:27:49.876219 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:27:49.876350 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:27:49.876494 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:27:49.876617 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:27:49.876739 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:27:49.876869 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:27:49.877007 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:27:49.877136 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:27:49.877258 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:27:49.877379 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:27:49.877544 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:27:49.877670 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:27:49.877802 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:27:49.877927 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:27:49.878063 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:27:49.878193 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:27:49.878315 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:27:49.878452 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:27:49.878577 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:27:49.878708 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:27:49.878844 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:27:49.878976 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:27:49.879098 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:27:49.879220 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:27:49.879349 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:27:49.879496 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:27:49.879508 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:27:49.879519 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:27:49.879527 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:27:49.879535 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:27:49.879543 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:27:49.879550 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:27:49.879558 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:27:49.879566 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:27:49.879574 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:27:49.879582 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:27:49.879591 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:27:49.879599 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:27:49.879607 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:27:49.879614 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:27:49.879622 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:27:49.879629 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:27:49.879637 kernel: iommu: Default domain type: Translated
Feb 13 19:27:49.879645 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:27:49.879652 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:27:49.879662 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:27:49.879670 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:27:49.879678 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:27:49.879806 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:27:49.879929 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:27:49.880049 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:27:49.880059 kernel: vgaarb: loaded
Feb 13 19:27:49.880067 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:27:49.880079 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:27:49.880088 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:27:49.880097 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:27:49.880106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:27:49.880114 kernel: pnp: PnP ACPI init
Feb 13 19:27:49.880248 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:27:49.880260 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:27:49.880268 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:27:49.880278 kernel: NET: Registered PF_INET protocol family
Feb 13 19:27:49.880286 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:27:49.880294 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:27:49.880302 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:27:49.880310 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:27:49.880317 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:27:49.880325 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:27:49.880333 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:27:49.880341 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:27:49.880350 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:27:49.880358 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:27:49.880487 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:27:49.880605 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:27:49.880730 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:27:49.880854 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:27:49.880969 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:27:49.881080 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:27:49.881094 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:27:49.881102 kernel: Initialise system trusted keyrings
Feb 13 19:27:49.881110 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:27:49.881118 kernel: Key type asymmetric registered
Feb 13 19:27:49.881126 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:27:49.881133 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:27:49.881141 kernel: io scheduler mq-deadline registered
Feb 13 19:27:49.881149 kernel: io scheduler kyber registered
Feb 13 19:27:49.881156 kernel: io scheduler bfq registered
Feb 13 19:27:49.881164 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:27:49.881175 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:27:49.881182 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:27:49.881190 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:27:49.881198 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:27:49.881206 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:27:49.881213 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:27:49.881221 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:27:49.881229 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:27:49.881359 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:27:49.881373 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:27:49.881503 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:27:49.881620 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:27:49 UTC (1739474869)
Feb 13 19:27:49.881736 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:27:49.881746 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:27:49.881754 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:27:49.881761 kernel: Segment Routing with IPv6
Feb 13 19:27:49.881773 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:27:49.881780 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:27:49.881796 kernel: Key type dns_resolver registered
Feb 13 19:27:49.881804 kernel: IPI shorthand broadcast: enabled
Feb 13 19:27:49.881811 kernel: sched_clock: Marking stable (596002895, 100025187)->(710407183, -14379101)
Feb 13 19:27:49.881820 kernel: registered taskstats version 1
Feb 13 19:27:49.881827 kernel: Loading compiled-in X.509 certificates
Feb 13 19:27:49.881835 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:27:49.881843 kernel: Key type .fscrypt registered
Feb 13 19:27:49.881850 kernel: Key type fscrypt-provisioning registered
Feb 13 19:27:49.881860 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:27:49.881868 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:27:49.881876 kernel: ima: No architecture policies found
Feb 13 19:27:49.881884 kernel: clk: Disabling unused clocks
Feb 13 19:27:49.881891 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 19:27:49.881899 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:27:49.881907 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 19:27:49.881914 kernel: Run /init as init process
Feb 13 19:27:49.881924 kernel: with arguments:
Feb 13 19:27:49.881932 kernel: /init
Feb 13 19:27:49.881939 kernel: with environment:
Feb 13 19:27:49.881947 kernel: HOME=/
Feb 13 19:27:49.881954 kernel: TERM=linux
Feb 13 19:27:49.881962 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:27:49.881971 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:27:49.881982 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:27:49.881993 systemd[1]: Detected virtualization kvm.
Feb 13 19:27:49.882001 systemd[1]: Detected architecture x86-64.
Feb 13 19:27:49.882009 systemd[1]: Running in initrd.
Feb 13 19:27:49.882017 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:27:49.882025 systemd[1]: Hostname set to .
Feb 13 19:27:49.882033 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:27:49.882042 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:27:49.882050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:27:49.882061 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:27:49.882080 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:27:49.882090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:27:49.882099 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:27:49.882108 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:27:49.882120 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:27:49.882129 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:27:49.882137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:27:49.882146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:27:49.882154 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:27:49.882163 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:27:49.882171 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:27:49.882180 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:27:49.882190 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:27:49.882199 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:27:49.882207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:27:49.882216 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:27:49.882224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:27:49.882233 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:27:49.882241 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:27:49.882250 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:27:49.882258 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:27:49.882269 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:27:49.882278 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:27:49.882286 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:27:49.882294 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:27:49.882303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:27:49.882311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:27:49.882320 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:27:49.882328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:27:49.882339 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:27:49.882368 systemd-journald[194]: Collecting audit messages is disabled.
Feb 13 19:27:49.882392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:27:49.882451 systemd-journald[194]: Journal started
Feb 13 19:27:49.882472 systemd-journald[194]: Runtime Journal (/run/log/journal/c5f8638ad877466aae27d7a7d1ea6841) is 6M, max 48.4M, 42.3M free.
Feb 13 19:27:49.867658 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 19:27:49.903968 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:27:49.903984 kernel: Bridge firewalling registered
Feb 13 19:27:49.894240 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 19:27:49.905604 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:27:49.906197 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:27:49.906776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:27:49.926678 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:27:49.929493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:27:49.930827 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:27:49.934984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:27:49.939248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:27:49.941995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:27:49.944428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:27:49.946761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:27:49.950215 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:27:49.962419 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:27:49.974643 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:27:49.985731 dracut-cmdline[233]: dracut-dracut-053
Feb 13 19:27:49.988722 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:27:49.991078 systemd-resolved[223]: Positive Trust Anchors:
Feb 13 19:27:49.991086 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:27:49.991116 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:27:49.993575 systemd-resolved[223]: Defaulting to hostname 'linux'.
Feb 13 19:27:49.994743 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:27:49.996933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:27:50.072453 kernel: SCSI subsystem initialized
Feb 13 19:27:50.081429 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:27:50.092438 kernel: iscsi: registered transport (tcp)
Feb 13 19:27:50.113438 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:27:50.113510 kernel: QLogic iSCSI HBA Driver
Feb 13 19:27:50.166007 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:27:50.179652 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:27:50.205134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:27:50.205219 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:27:50.205232 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:27:50.247449 kernel: raid6: avx2x4 gen() 30225 MB/s
Feb 13 19:27:50.264434 kernel: raid6: avx2x2 gen() 30697 MB/s
Feb 13 19:27:50.281554 kernel: raid6: avx2x1 gen() 25702 MB/s
Feb 13 19:27:50.281605 kernel: raid6: using algorithm avx2x2 gen() 30697 MB/s
Feb 13 19:27:50.299559 kernel: raid6: .... xor() 19807 MB/s, rmw enabled
Feb 13 19:27:50.299638 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:27:50.319434 kernel: xor: automatically using best checksumming function avx
Feb 13 19:27:50.468449 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:27:50.481482 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:27:50.492527 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:27:50.508072 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Feb 13 19:27:50.513557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:27:50.530608 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:27:50.545546 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Feb 13 19:27:50.579513 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:27:50.598680 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:27:50.661068 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:27:50.669688 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:27:50.683308 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:27:50.685943 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:27:50.686362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:27:50.686869 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:27:50.695650 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:27:50.705458 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:27:50.711417 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:27:50.724484 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:27:50.724635 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:27:50.724647 kernel: GPT:9289727 != 19775487
Feb 13 19:27:50.724657 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:27:50.724667 kernel: GPT:9289727 != 19775487
Feb 13 19:27:50.724683 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:27:50.724695 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:27:50.724706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:27:50.730439 kernel: libata version 3.00 loaded.
Feb 13 19:27:50.738423 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:27:50.764096 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:27:50.764114 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:27:50.764284 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:27:50.764450 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:27:50.764461 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:27:50.764471 kernel: scsi host0: ahci
Feb 13 19:27:50.764626 kernel: scsi host1: ahci
Feb 13 19:27:50.764789 kernel: scsi host2: ahci
Feb 13 19:27:50.764939 kernel: scsi host3: ahci
Feb 13 19:27:50.765083 kernel: scsi host4: ahci
Feb 13 19:27:50.765239 kernel: scsi host5: ahci
Feb 13 19:27:50.765387 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 19:27:50.765559 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 19:27:50.765571 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 19:27:50.765582 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 19:27:50.765593 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 19:27:50.765604 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 19:27:50.744754 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:27:50.744886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:27:50.769503 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (478)
Feb 13 19:27:50.769529 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469)
Feb 13 19:27:50.747223 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:27:50.749794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:27:50.749916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:27:50.752528 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:27:50.769817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:27:50.811547 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:27:50.825072 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:27:50.825985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:27:50.835487 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:27:50.835749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:27:50.847379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:27:50.866566 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:27:50.869170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:27:50.877539 disk-uuid[559]: Primary Header is updated.
Feb 13 19:27:50.877539 disk-uuid[559]: Secondary Entries is updated.
Feb 13 19:27:50.877539 disk-uuid[559]: Secondary Header is updated.
Feb 13 19:27:50.881432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:27:50.885417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:27:50.894682 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:27:51.069472 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:27:51.069546 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:27:51.069558 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:27:51.070432 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:27:51.071421 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:27:51.072420 kernel: ata3.00: applying bridge limits
Feb 13 19:27:51.072435 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:27:51.073422 kernel: ata3.00: configured for UDMA/100
Feb 13 19:27:51.075423 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:27:51.079425 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:27:51.128437 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:27:51.143120 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:27:51.143135 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:27:51.887442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:27:51.887717 disk-uuid[560]: The operation has completed successfully.
Feb 13 19:27:51.921247 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:27:51.921373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:27:51.963589 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:27:51.967198 sh[596]: Success
Feb 13 19:27:51.979438 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:27:52.015428 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:27:52.023975 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:27:52.026647 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:27:52.041411 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 19:27:52.041461 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:27:52.041472 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:27:52.041494 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:27:52.042770 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:27:52.046948 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:27:52.048577 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:27:52.055552 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:27:52.057239 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:27:52.067868 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:27:52.067919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:27:52.067931 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:27:52.071416 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:27:52.079879 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:27:52.081559 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:27:52.091076 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:27:52.097589 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:27:52.151072 ignition[692]: Ignition 2.20.0
Feb 13 19:27:52.151908 ignition[692]: Stage: fetch-offline
Feb 13 19:27:52.151947 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:27:52.151957 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:27:52.152041 ignition[692]: parsed url from cmdline: ""
Feb 13 19:27:52.152045 ignition[692]: no config URL provided
Feb 13 19:27:52.152051 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:27:52.152059 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:27:52.152084 ignition[692]: op(1): [started] loading QEMU firmware config module
Feb 13 19:27:52.152089 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:27:52.160379 ignition[692]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:27:52.177224 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:27:52.187587 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:27:52.200283 ignition[692]: parsing config with SHA512: d66d42f7d068e6994bb3367050c7778248a228dcd1c8b31b6e11bf754222ec1804acd7e0fb57b311e655d9b4c5c2f8a2c6e9777b1f744f1e14bf35833efc33fd
Feb 13 19:27:52.204042 unknown[692]: fetched base config from "system"
Feb 13 19:27:52.204784 unknown[692]: fetched user config from "qemu"
Feb 13 19:27:52.205212 ignition[692]: fetch-offline: fetch-offline passed
Feb 13 19:27:52.205287 ignition[692]: Ignition finished successfully
Feb 13 19:27:52.209534 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:27:52.213896 systemd-networkd[785]: lo: Link UP
Feb 13 19:27:52.213907 systemd-networkd[785]: lo: Gained carrier
Feb 13 19:27:52.215666 systemd-networkd[785]: Enumeration completed
Feb 13 19:27:52.215763 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:27:52.216041 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:27:52.216045 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:27:52.216986 systemd-networkd[785]: eth0: Link UP
Feb 13 19:27:52.216990 systemd-networkd[785]: eth0: Gained carrier
Feb 13 19:27:52.216997 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:27:52.219060 systemd[1]: Reached target network.target - Network.
Feb 13 19:27:52.222210 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:27:52.229475 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:27:52.229566 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:27:52.246044 ignition[789]: Ignition 2.20.0
Feb 13 19:27:52.247021 ignition[789]: Stage: kargs
Feb 13 19:27:52.247757 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:27:52.247774 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:27:52.250340 ignition[789]: kargs: kargs passed
Feb 13 19:27:52.250387 ignition[789]: Ignition finished successfully
Feb 13 19:27:52.254155 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:27:52.265593 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:27:52.276807 ignition[799]: Ignition 2.20.0
Feb 13 19:27:52.276817 ignition[799]: Stage: disks
Feb 13 19:27:52.276971 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:27:52.276983 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:27:52.277793 ignition[799]: disks: disks passed
Feb 13 19:27:52.280003 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:27:52.277838 ignition[799]: Ignition finished successfully
Feb 13 19:27:52.281338 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:27:52.282836 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:27:52.284962 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:27:52.285361 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:27:52.285694 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:27:52.297643 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:27:52.308159 systemd-resolved[223]: Detected conflict on linux IN A 10.0.0.134
Feb 13 19:27:52.308173 systemd-resolved[223]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Feb 13 19:27:52.310781 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:27:52.316694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:27:53.050469 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:27:53.135508 kernel: EXT4-fs (vda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none.
Feb 13 19:27:53.135808 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:27:53.136610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:27:53.145484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:27:53.147423 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:27:53.149474 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:27:53.149533 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:27:53.149562 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:27:53.157180 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818)
Feb 13 19:27:53.159096 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:27:53.159111 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:27:53.159122 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:27:53.162420 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:27:53.162889 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:27:53.165909 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:27:53.167477 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:27:53.201252 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:27:53.206534 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:27:53.211586 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:27:53.216073 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:27:53.304993 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:27:53.309626 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:27:53.312794 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:27:53.323445 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:27:53.337243 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:27:53.345409 ignition[932]: INFO : Ignition 2.20.0
Feb 13 19:27:53.345409 ignition[932]: INFO : Stage: mount
Feb 13 19:27:53.347414 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:27:53.347414 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:27:53.349786 ignition[932]: INFO : mount: mount passed
Feb 13 19:27:53.350572 ignition[932]: INFO : Ignition finished successfully
Feb 13 19:27:53.353210 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:27:53.364497 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:27:53.462563 systemd-networkd[785]: eth0: Gained IPv6LL
Feb 13 19:27:54.039997 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:27:54.053547 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:27:54.060435 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
Feb 13 19:27:54.064683 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:27:54.064714 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:27:54.064729 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:27:54.068421 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:27:54.069511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:27:54.091576 ignition[961]: INFO : Ignition 2.20.0 Feb 13 19:27:54.091576 ignition[961]: INFO : Stage: files Feb 13 19:27:54.093524 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:27:54.093524 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:27:54.096074 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:27:54.097420 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:27:54.097420 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:27:54.100869 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:27:54.102317 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:27:54.103939 unknown[961]: wrote ssh authorized keys file for user: core Feb 13 19:27:54.105146 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:27:54.107456 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:27:54.109334 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 19:27:54.148615 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:27:54.279988 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:27:54.279988 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:27:54.283886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 19:27:54.655470 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:27:55.019391 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:27:55.019391 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:27:55.022945 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:27:55.025083 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:27:55.025083 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:27:55.025083 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:27:55.029247 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:27:55.031100 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:27:55.031100 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:27:55.034146 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:27:55.050118 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:27:55.055182 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:27:55.056806 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:27:55.056806 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:27:55.059504 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:27:55.060919 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:27:55.062663 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:27:55.062663 ignition[961]: INFO : files: files passed Feb 13 19:27:55.065206 ignition[961]: INFO : Ignition finished successfully Feb 13 19:27:55.068201 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:27:55.076518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:27:55.078738 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:27:55.080597 systemd[1]: ignition-quench.service: Deactivated successfully. 
Feb 13 19:27:55.080709 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:27:55.091825 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:27:55.095658 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:27:55.095658 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:27:55.098753 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:27:55.102080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:27:55.103532 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:27:55.115518 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:27:55.136533 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:27:55.136655 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:27:55.138850 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:27:55.140885 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:27:55.142879 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:27:55.152509 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:27:55.165475 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:27:55.167087 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:27:55.179341 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:27:55.181646 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:27:55.182905 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:27:55.184794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:27:55.184900 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:27:55.187004 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:27:55.188694 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:27:55.190652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:27:55.192699 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:27:55.194770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:27:55.196846 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:27:55.198922 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:27:55.201174 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:27:55.203137 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:27:55.205294 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:27:55.207050 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:27:55.207184 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:27:55.209277 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 19:27:55.210883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:27:55.212931 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:27:55.213024 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:27:55.215127 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:27:55.215238 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:27:55.217390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:27:55.217511 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:27:55.219499 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:27:55.221185 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:27:55.225474 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:27:55.227811 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:27:55.229480 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:27:55.231388 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:27:55.231488 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:27:55.233731 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:27:55.233814 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:27:55.235560 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:27:55.235678 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:27:55.237571 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:27:55.237681 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:27:55.248522 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:27:55.249437 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:27:55.249551 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:27:55.252230 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:27:55.253284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:27:55.253463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:27:55.255704 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:27:55.255871 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:27:55.261817 ignition[1016]: INFO : Ignition 2.20.0 Feb 13 19:27:55.261817 ignition[1016]: INFO : Stage: umount Feb 13 19:27:55.261817 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:27:55.261817 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:27:55.261817 ignition[1016]: INFO : umount: umount passed Feb 13 19:27:55.261817 ignition[1016]: INFO : Ignition finished successfully Feb 13 19:27:55.262980 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:27:55.263104 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:27:55.265171 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:27:55.265276 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:27:55.267715 systemd[1]: Stopped target network.target - Network. 
Feb 13 19:27:55.268865 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:27:55.268927 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:27:55.270654 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:27:55.270702 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:27:55.272484 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:27:55.272533 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:27:55.274462 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:27:55.274511 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:27:55.276810 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:27:55.278846 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:27:55.281848 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:27:55.287460 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:27:55.287584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:27:55.290915 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:27:55.291139 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:27:55.291256 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:27:55.294730 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:27:55.295368 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:27:55.295474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:27:55.308497 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:27:55.308917 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:27:55.308968 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:27:55.310743 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:27:55.310789 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:27:55.314625 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:27:55.314682 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:27:55.315144 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:27:55.315186 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:27:55.319511 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:27:55.320827 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:27:55.320892 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:27:55.331537 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:27:55.332562 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:27:55.339046 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:27:55.340108 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:27:55.342823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:27:55.342877 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:27:55.345976 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:27:55.346020 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:27:55.348950 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:27:55.349885 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:27:55.351967 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:27:55.352864 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:27:55.354861 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:27:55.355824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:27:55.371536 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:27:55.372612 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:27:55.372672 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:27:55.374955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:27:55.375004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:27:55.377905 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:27:55.377967 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:27:55.378302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:27:55.378415 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:27:55.417938 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:27:55.418902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:27:55.420862 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:27:55.422832 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:27:55.422888 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:27:55.437532 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:27:55.443745 systemd[1]: Switching root. Feb 13 19:27:55.475493 systemd-journald[194]: Journal stopped Feb 13 19:27:56.551555 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Feb 13 19:27:56.551618 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:27:56.551639 kernel: SELinux: policy capability open_perms=1 Feb 13 19:27:56.551656 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:27:56.551668 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:27:56.551683 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:27:56.551695 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:27:56.551706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:27:56.551717 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:27:56.551732 kernel: audit: type=1403 audit(1739474875.775:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:27:56.551748 systemd[1]: Successfully loaded SELinux policy in 40.534ms. Feb 13 19:27:56.551773 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.909ms. 
Feb 13 19:27:56.551786 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:27:56.551799 systemd[1]: Detected virtualization kvm. Feb 13 19:27:56.551810 systemd[1]: Detected architecture x86-64. Feb 13 19:27:56.551822 systemd[1]: Detected first boot. Feb 13 19:27:56.551834 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:27:56.551846 zram_generator::config[1063]: No configuration found. Feb 13 19:27:56.551861 kernel: Guest personality initialized and is inactive Feb 13 19:27:56.551872 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 19:27:56.551883 kernel: Initialized host personality Feb 13 19:27:56.551894 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:27:56.551906 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:27:56.551921 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:27:56.551932 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:27:56.551944 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:27:56.551959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:27:56.551971 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:27:56.551983 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:27:56.551995 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:27:56.552008 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:27:56.552020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:27:56.552032 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:27:56.552045 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:27:56.552059 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:27:56.552071 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:27:56.552083 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:27:56.552094 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:27:56.552106 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:27:56.552124 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:27:56.552136 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:27:56.552149 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:27:56.552163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:27:56.552175 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:27:56.552187 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:27:56.552199 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Feb 13 19:27:56.552211 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:27:56.552222 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:27:56.552240 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:27:56.552252 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:27:56.552264 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:27:56.552280 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:27:56.552292 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:27:56.552303 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:27:56.552315 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:27:56.552327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:27:56.552339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:27:56.552351 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:27:56.552363 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:27:56.552375 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:27:56.552389 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:27:56.552415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:56.552427 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:27:56.552439 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:27:56.552451 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:27:56.552463 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:27:56.552475 systemd[1]: Reached target machines.target - Containers. Feb 13 19:27:56.552487 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:27:56.552499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:27:56.552513 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:27:56.552525 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:27:56.552537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:27:56.552550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:27:56.552562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:27:56.552574 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:27:56.552586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:27:56.552604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:27:56.552619 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:27:56.552631 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Feb 13 19:27:56.552642 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:27:56.552654 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:27:56.552668 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:27:56.552680 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:27:56.552692 kernel: fuse: init (API version 7.39) Feb 13 19:27:56.552704 kernel: loop: module loaded Feb 13 19:27:56.552718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:27:56.552735 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:27:56.552747 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:27:56.552759 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:27:56.552771 kernel: ACPI: bus type drm_connector registered Feb 13 19:27:56.552783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:27:56.552795 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:27:56.552806 systemd[1]: Stopped verity-setup.service. Feb 13 19:27:56.552821 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:56.552835 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:27:56.552847 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:27:56.552876 systemd-journald[1138]: Collecting audit messages is disabled. Feb 13 19:27:56.552897 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:27:56.552912 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:27:56.552924 systemd-journald[1138]: Journal started Feb 13 19:27:56.552947 systemd-journald[1138]: Runtime Journal (/run/log/journal/c5f8638ad877466aae27d7a7d1ea6841) is 6M, max 48.4M, 42.3M free. Feb 13 19:27:56.323666 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:27:56.338323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:27:56.338794 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:27:56.556499 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:27:56.557257 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:27:56.558506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:27:56.559846 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:27:56.561330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:27:56.562911 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:27:56.563129 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:27:56.564632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:27:56.564851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:27:56.566316 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:27:56.566541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 19:27:56.567903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:27:56.568112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:27:56.569658 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:27:56.569870 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:27:56.571504 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:27:56.571719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:27:56.573145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:27:56.574680 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:27:56.576247 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:27:56.578046 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:27:56.591350 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:27:56.601514 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:27:56.603760 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:27:56.604867 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:27:56.604896 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:27:56.606839 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:27:56.609104 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:27:56.614233 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:27:56.615459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:27:56.616765 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:27:56.619191 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:27:56.620484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:27:56.623052 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:27:56.624300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:27:56.628534 systemd-journald[1138]: Time spent on flushing to /var/log/journal/c5f8638ad877466aae27d7a7d1ea6841 is 15.124ms for 964 entries. Feb 13 19:27:56.628534 systemd-journald[1138]: System Journal (/var/log/journal/c5f8638ad877466aae27d7a7d1ea6841) is 8M, max 195.6M, 187.6M free. Feb 13 19:27:56.655894 systemd-journald[1138]: Received client request to flush runtime journal. Feb 13 19:27:56.626588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:27:56.631610 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:27:56.635611 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:27:56.641187 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Feb 13 19:27:56.642536 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:27:56.644020 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:27:56.648762 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:27:56.654306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:27:56.658374 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:27:56.667850 kernel: loop0: detected capacity change from 0 to 147912 Feb 13 19:27:56.666144 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:27:56.675353 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:27:56.679492 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:27:56.681204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:27:56.693789 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:27:56.694179 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:27:56.705716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:27:56.707649 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:27:56.710122 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:27:56.718454 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 19:27:56.727739 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Feb 13 19:27:56.727758 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Feb 13 19:27:56.734752 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:27:56.752430 kernel: loop2: detected capacity change from 0 to 138176 Feb 13 19:27:56.794528 kernel: loop3: detected capacity change from 0 to 147912 Feb 13 19:27:56.806420 kernel: loop4: detected capacity change from 0 to 218376 Feb 13 19:27:56.817832 kernel: loop5: detected capacity change from 0 to 138176 Feb 13 19:27:56.829749 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:27:56.830782 (sd-merge)[1207]: Merged extensions into '/usr'. Feb 13 19:27:56.836185 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:27:56.836204 systemd[1]: Reloading... Feb 13 19:27:56.903422 zram_generator::config[1238]: No configuration found. Feb 13 19:27:56.934318 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:27:57.023799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:27:57.087411 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:27:57.087995 systemd[1]: Reloading finished in 251 ms. Feb 13 19:27:57.107750 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:27:57.109417 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:27:57.124818 systemd[1]: Starting ensure-sysext.service... 
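The loop device messages and the (sd-merge) lines above are systemd-sysext at work: it scans the extension directories for *.raw images (here including the kubernetes.raw symlink Ignition created), loop-mounts them, and overlays their /usr trees onto the running system, after which ldconfig and a daemon reload pick up the merged content. A small illustrative sketch of what sysext would find on this host, assuming the commonly documented search paths; this is not how systemd-sysext itself is implemented:

```python
from pathlib import Path

# systemd-sysext considers extension images (*.raw files or plain directories)
# found in these hierarchies; on this host /etc/extensions/kubernetes.raw is the
# symlink Ignition created, pointing into /opt/extensions/kubernetes/.
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    """Yield (name, resolved path) for each candidate extension image."""
    for base in map(Path, SEARCH_PATHS):
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            if entry.suffix == ".raw":
                yield entry.stem, entry.resolve()
            elif entry.is_dir():
                yield entry.name, entry

if __name__ == "__main__":
    for name, target in list_extension_images():
        print(f"{name}: {target}")
    # Expected here (per the log): kubernetes ->
    # /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw; the
    # containerd-flatcar and docker-flatcar images ship with Flatcar itself and
    # live elsewhere, which this sketch does not cover.
```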
Feb 13 19:27:57.126697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:27:57.138833 systemd[1]: Reload requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:27:57.138853 systemd[1]: Reloading... Feb 13 19:27:57.148691 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:27:57.148968 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:27:57.149913 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:27:57.150187 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Feb 13 19:27:57.150262 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Feb 13 19:27:57.154410 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:27:57.154515 systemd-tmpfiles[1273]: Skipping /boot Feb 13 19:27:57.167723 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:27:57.167808 systemd-tmpfiles[1273]: Skipping /boot Feb 13 19:27:57.200430 zram_generator::config[1303]: No configuration found. Feb 13 19:27:57.308827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:27:57.373569 systemd[1]: Reloading finished in 234 ms. Feb 13 19:27:57.389070 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:27:57.406380 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:27:57.428839 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:27:57.431998 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:27:57.434775 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:27:57.439815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:27:57.444260 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:27:57.450497 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:27:57.455081 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:57.455340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:27:57.460235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:27:57.463306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:27:57.469141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:27:57.470389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:27:57.470545 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:27:57.473146 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Feb 13 19:27:57.474512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:57.475901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:27:57.476761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:27:57.478488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:27:57.479795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:27:57.481894 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:27:57.483970 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:27:57.484680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:27:57.485888 systemd-udevd[1351]: Using default interface naming scheme 'v255'. Feb 13 19:27:57.486514 augenrules[1370]: No rules Feb 13 19:27:57.487246 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:27:57.487600 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:27:57.497813 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:57.498357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:27:57.509839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:27:57.516443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:27:57.521801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:27:57.523549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:27:57.523722 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:27:57.533706 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:27:57.534833 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:57.537465 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:27:57.540016 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:27:57.542851 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:27:57.545997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:27:57.546208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:27:57.550017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:27:57.550228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:27:57.552298 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:27:57.553666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:27:57.563483 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:27:57.572839 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Feb 13 19:27:57.575447 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1388) Feb 13 19:27:57.602129 systemd[1]: Finished ensure-sysext.service. Feb 13 19:27:57.604220 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:27:57.621465 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 19:27:57.622111 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:27:57.628897 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:27:57.631111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:57.644197 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:27:57.644504 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:27:57.644713 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:27:57.640714 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:27:57.644067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:27:57.645655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:27:57.648750 systemd-resolved[1350]: Positive Trust Anchors: Feb 13 19:27:57.648784 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:27:57.648822 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:27:57.651084 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:27:57.654557 systemd-resolved[1350]: Defaulting to hostname 'linux'. Feb 13 19:27:57.655524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:27:57.657837 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:27:57.659004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:27:57.663317 augenrules[1422]: /sbin/augenrules: No change Feb 13 19:27:57.663441 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:27:57.665037 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:27:57.666579 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 19:27:57.668416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:27:57.672701 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
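The positive trust anchor systemd-resolved lists above is the root zone's DNSSEC DS record; its four fields are the key tag, algorithm, digest type, and digest. A quick parse of that exact line, with the registry meanings noted only for the values that actually appear here:

```python
# The DS record systemd-resolved reports as its built-in positive trust anchor.
ANCHOR = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

def parse_ds(record: str) -> dict:
    """Split a 'NAME IN DS keytag alg digesttype digest' line into its fields."""
    name, _cls, _type, key_tag, alg, digest_type, digest = record.split()
    return {
        "owner": name,                   # "." is the DNS root zone
        "key_tag": int(key_tag),         # identifies which KSK this DS refers to
        "algorithm": int(alg),           # 8 == RSA/SHA-256
        "digest_type": int(digest_type), # 2 == SHA-256
        "digest": digest,                # hash of the root KSK's DNSKEY record
    }

print(parse_ds(ANCHOR))
```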
Feb 13 19:27:57.673879 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:27:57.673905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:27:57.674370 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:27:57.677018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:27:57.677229 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:27:57.680035 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:27:57.680242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:27:57.681810 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:27:57.682415 augenrules[1447]: No rules Feb 13 19:27:57.686667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:27:57.688624 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:27:57.688869 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:27:57.691078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:27:57.691306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:27:57.706926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:27:57.709259 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:27:57.710966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:27:57.711034 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:27:57.779291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:27:57.788418 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:27:57.804504 kernel: kvm_amd: TSC scaling supported Feb 13 19:27:57.804530 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:27:57.804543 kernel: kvm_amd: Nested Paging enabled Feb 13 19:27:57.805496 kernel: kvm_amd: LBR virtualization supported Feb 13 19:27:57.805516 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:27:57.805533 kernel: kvm_amd: Virtual GIF supported Feb 13 19:27:57.820728 systemd-networkd[1442]: lo: Link UP Feb 13 19:27:57.820741 systemd-networkd[1442]: lo: Gained carrier Feb 13 19:27:57.822845 systemd-networkd[1442]: Enumeration completed Feb 13 19:27:57.823226 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:27:57.823239 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:27:57.823501 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:27:57.824005 systemd[1]: Reached target network.target - Network. 
Feb 13 19:27:57.824831 systemd-networkd[1442]: eth0: Link UP Feb 13 19:27:57.824836 systemd-networkd[1442]: eth0: Gained carrier Feb 13 19:27:57.824849 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:27:57.832798 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:27:57.836555 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:27:57.836560 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:27:57.836907 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:27:57.837236 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:27:57.846476 systemd-networkd[1442]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:27:57.847252 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Feb 13 19:27:58.940318 systemd-timesyncd[1445]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:27:58.940374 systemd-timesyncd[1445]: Initial clock synchronization to Thu 2025-02-13 19:27:58.940175 UTC. Feb 13 19:27:58.940523 systemd-resolved[1350]: Clock change detected. Flushing caches. Feb 13 19:27:58.941807 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:27:58.956283 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:27:58.981270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:27:58.993171 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:27:59.002260 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:27:59.035388 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:27:59.037043 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:27:59.038227 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:27:59.039411 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:27:59.040695 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:27:59.042171 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:27:59.043406 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:27:59.044680 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:27:59.045944 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:27:59.045978 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:27:59.047023 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:27:59.048784 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:27:59.051602 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:27:59.055182 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
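The jump between adjacent journal entries here, 19:27:57.847252 followed by 19:27:58.940318, is where systemd-timesyncd stepped the clock at initial synchronization, which is also why systemd-resolved reports a clock change and flushes its caches. The difference, about 1.09 s, is only an upper bound on the step, since some real time also passed between the two entries:

```python
from datetime import datetime

# Adjacent journal entries straddling the initial clock synchronization.
before = datetime.strptime("19:27:57.847252", "%H:%M:%S.%f")  # last pre-sync entry
after  = datetime.strptime("19:27:58.940318", "%H:%M:%S.%f")  # first post-sync entry

# Real elapsed time plus the applied step; the log alone cannot separate the two,
# so this only bounds how far the VM's clock was behind the time server.
print((after - before).total_seconds())  # 1.093066
```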
Feb 13 19:27:59.056632 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:27:59.057896 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:27:59.061777 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:27:59.063219 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:27:59.065597 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:27:59.067248 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:27:59.068411 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:27:59.069378 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:27:59.070346 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:27:59.070381 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:27:59.071396 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:27:59.073498 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:27:59.077058 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:27:59.081806 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:27:59.082160 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:27:59.084463 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:27:59.085213 jq[1482]: false Feb 13 19:27:59.086501 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:27:59.090678 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:27:59.095272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:27:59.099005 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:27:59.106664 dbus-daemon[1481]: [system] SELinux support is enabled Feb 13 19:27:59.107916 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:27:59.108886 extend-filesystems[1483]: Found loop3 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found loop4 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found loop5 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found sr0 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda1 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda2 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda3 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found usr Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda4 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda6 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda7 Feb 13 19:27:59.108886 extend-filesystems[1483]: Found vda9 Feb 13 19:27:59.108886 extend-filesystems[1483]: Checking size of /dev/vda9 Feb 13 19:27:59.112496 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:27:59.113726 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 19:27:59.124407 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:27:59.126361 extend-filesystems[1483]: Resized partition /dev/vda9 Feb 13 19:27:59.128856 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:27:59.130330 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:27:59.139755 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:27:59.139793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1388) Feb 13 19:27:59.133731 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:27:59.147828 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:27:59.158066 jq[1504]: true Feb 13 19:27:59.188169 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:27:59.188206 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:27:59.188206 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:27:59.188206 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:27:59.159347 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:27:59.198006 update_engine[1499]: I20250213 19:27:59.161882 1499 main.cc:92] Flatcar Update Engine starting Feb 13 19:27:59.198006 update_engine[1499]: I20250213 19:27:59.167541 1499 update_check_scheduler.cc:74] Next update check in 6m2s Feb 13 19:27:59.198232 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Feb 13 19:27:59.159606 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:27:59.159991 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:27:59.200605 jq[1508]: true Feb 13 19:27:59.160261 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:27:59.162472 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:27:59.162704 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:27:59.176332 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:27:59.189313 systemd-logind[1491]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:27:59.189338 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:27:59.191132 systemd-logind[1491]: New seat seat0. Feb 13 19:27:59.193536 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:27:59.198302 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:27:59.198586 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:27:59.214987 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:27:59.216234 tar[1507]: linux-amd64/LICENSE Feb 13 19:27:59.216482 tar[1507]: linux-amd64/helm Feb 13 19:27:59.217480 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:27:59.217625 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
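The extend-filesystems/resize2fs entries above grow the root filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. A quick check of that arithmetic:

    BLOCK = 4096  # resize2fs reports "(4k)" blocks for /dev/vda9

    old_blocks, new_blocks = 553_472, 1_864_699  # values from the log above
    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")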
Feb 13 19:27:59.220899 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:27:59.221021 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:27:59.223607 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:27:59.230837 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:27:59.234864 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:27:59.236958 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:27:59.241096 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:27:59.259354 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:27:59.267186 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:27:59.268017 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:27:59.280060 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:27:59.280303 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:27:59.294235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:27:59.304833 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:27:59.308254 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:27:59.311587 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:27:59.314034 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:27:59.381275 containerd[1509]: time="2025-02-13T19:27:59.381183991Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:27:59.404943 containerd[1509]: time="2025-02-13T19:27:59.404877687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:27:59.406604 containerd[1509]: time="2025-02-13T19:27:59.406565822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:27:59.406604 containerd[1509]: time="2025-02-13T19:27:59.406602140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:27:59.406657 containerd[1509]: time="2025-02-13T19:27:59.406618100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:27:59.406839 containerd[1509]: time="2025-02-13T19:27:59.406818687Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:27:59.406876 containerd[1509]: time="2025-02-13T19:27:59.406839886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:27:59.406935 containerd[1509]: time="2025-02-13T19:27:59.406908886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:27:59.406957 containerd[1509]: time="2025-02-13T19:27:59.406934564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407203 containerd[1509]: time="2025-02-13T19:27:59.407182338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407203 containerd[1509]: time="2025-02-13T19:27:59.407200763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407251 containerd[1509]: time="2025-02-13T19:27:59.407214479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407251 containerd[1509]: time="2025-02-13T19:27:59.407225660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407350 containerd[1509]: time="2025-02-13T19:27:59.407328042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407592 containerd[1509]: time="2025-02-13T19:27:59.407572089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407797 containerd[1509]: time="2025-02-13T19:27:59.407729254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:27:59.407797 containerd[1509]: time="2025-02-13T19:27:59.407747067Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:27:59.407887 containerd[1509]: time="2025-02-13T19:27:59.407869156Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:27:59.407952 containerd[1509]: time="2025-02-13T19:27:59.407936593Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:27:59.428113 containerd[1509]: time="2025-02-13T19:27:59.428056047Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:27:59.428163 containerd[1509]: time="2025-02-13T19:27:59.428114847Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:27:59.428163 containerd[1509]: time="2025-02-13T19:27:59.428133492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:27:59.428163 containerd[1509]: time="2025-02-13T19:27:59.428149872Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:27:59.428217 containerd[1509]: time="2025-02-13T19:27:59.428165251Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:27:59.428321 containerd[1509]: time="2025-02-13T19:27:59.428300285Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 19:27:59.428563 containerd[1509]: time="2025-02-13T19:27:59.428534664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:27:59.428697 containerd[1509]: time="2025-02-13T19:27:59.428638669Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:27:59.428697 containerd[1509]: time="2025-02-13T19:27:59.428659558Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:27:59.428697 containerd[1509]: time="2025-02-13T19:27:59.428673665Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:27:59.428697 containerd[1509]: time="2025-02-13T19:27:59.428688522Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428786 containerd[1509]: time="2025-02-13T19:27:59.428701827Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428786 containerd[1509]: time="2025-02-13T19:27:59.428719911Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428786 containerd[1509]: time="2025-02-13T19:27:59.428732856Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428786 containerd[1509]: time="2025-02-13T19:27:59.428747202Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428786 containerd[1509]: time="2025-02-13T19:27:59.428776668Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428789362Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428801104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428821452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428835388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428847861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428859703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428871165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.428880 containerd[1509]: time="2025-02-13T19:27:59.428883488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428897023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428909146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428928993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428945444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428956124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428967185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428978646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.428991300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.429009364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.429020585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429036 containerd[1509]: time="2025-02-13T19:27:59.429035834Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429081579Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429096177Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429106236Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429117206Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429126013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429137414Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429146942Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:27:59.429237 containerd[1509]: time="2025-02-13T19:27:59.429156800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:27:59.429460 containerd[1509]: time="2025-02-13T19:27:59.429406929Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:27:59.429460 containerd[1509]: time="2025-02-13T19:27:59.429448998Z" level=info msg="Connect containerd service" Feb 13 19:27:59.429598 containerd[1509]: time="2025-02-13T19:27:59.429476911Z" level=info msg="using legacy CRI server" Feb 13 19:27:59.429598 containerd[1509]: time="2025-02-13T19:27:59.429483834Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:27:59.429598 containerd[1509]: time="2025-02-13T19:27:59.429587318Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:27:59.430193 containerd[1509]: time="2025-02-13T19:27:59.430160773Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:27:59.430359 
containerd[1509]: time="2025-02-13T19:27:59.430321084Z" level=info msg="Start subscribing containerd event" Feb 13 19:27:59.430387 containerd[1509]: time="2025-02-13T19:27:59.430369454Z" level=info msg="Start recovering state" Feb 13 19:27:59.430453 containerd[1509]: time="2025-02-13T19:27:59.430432462Z" level=info msg="Start event monitor" Feb 13 19:27:59.430487 containerd[1509]: time="2025-02-13T19:27:59.430460204Z" level=info msg="Start snapshots syncer" Feb 13 19:27:59.430487 containerd[1509]: time="2025-02-13T19:27:59.430434135Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:27:59.430525 containerd[1509]: time="2025-02-13T19:27:59.430472187Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:27:59.430525 containerd[1509]: time="2025-02-13T19:27:59.430507022Z" level=info msg="Start streaming server" Feb 13 19:27:59.430561 containerd[1509]: time="2025-02-13T19:27:59.430549562Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:27:59.430679 containerd[1509]: time="2025-02-13T19:27:59.430623831Z" level=info msg="containerd successfully booted in 0.050614s" Feb 13 19:27:59.430711 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:27:59.605310 tar[1507]: linux-amd64/README.md Feb 13 19:27:59.621397 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:27:59.682832 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:27:59.685104 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:48932.service - OpenSSH per-connection server daemon (10.0.0.1:48932). Feb 13 19:27:59.737217 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:27:59.739348 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:27:59.749874 systemd-logind[1491]: New session 1 of user core. Feb 13 19:27:59.751151 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:27:59.769965 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:27:59.780393 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:27:59.789990 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:27:59.793519 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:27:59.795874 systemd-logind[1491]: New session c1 of user core. Feb 13 19:27:59.925048 systemd[1576]: Queued start job for default target default.target. Feb 13 19:27:59.934032 systemd[1576]: Created slice app.slice - User Application Slice. Feb 13 19:27:59.934056 systemd[1576]: Reached target paths.target - Paths. Feb 13 19:27:59.934095 systemd[1576]: Reached target timers.target - Timers. Feb 13 19:27:59.935615 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:27:59.946530 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:27:59.946671 systemd[1576]: Reached target sockets.target - Sockets. Feb 13 19:27:59.946710 systemd[1576]: Reached target basic.target - Basic System. Feb 13 19:27:59.946777 systemd[1576]: Reached target default.target - Main User Target. Feb 13 19:27:59.946817 systemd[1576]: Startup finished in 144ms. Feb 13 19:27:59.947109 systemd[1]: Started user@500.service - User Manager for UID 500. 
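The long "Start cri plugin with config {...}" entry earlier in this stretch is containerd dumping its effective CRI configuration; the settings that usually matter for Kubernetes are the overlayfs snapshotter, SystemdCgroup:true on the runc runtime, and the sandbox (pause) image. A small sketch that pulls those fields out of such a dump, using an abridged excerpt copied from the log:

    import re

    # Abridged excerpt of the logged CRI config dump.
    dump = ("Snapshotter:overlayfs DefaultRuntimeName:runc "
            "Options:map[SystemdCgroup:true] "
            "SandboxImage:registry.k8s.io/pause:3.8")

    for key in ("Snapshotter", "DefaultRuntimeName", "SystemdCgroup", "SandboxImage"):
        m = re.search(rf"{key}:(\S+?)(?:[\]\s]|$)", dump)
        print(key, "=", m.group(1) if m else "<not found>")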
Feb 13 19:27:59.949728 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:28:00.013342 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:58982.service - OpenSSH per-connection server daemon (10.0.0.1:58982). Feb 13 19:28:00.054962 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 58982 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:00.056381 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:00.060636 systemd-logind[1491]: New session 2 of user core. Feb 13 19:28:00.069912 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:28:00.123127 sshd[1589]: Connection closed by 10.0.0.1 port 58982 Feb 13 19:28:00.123489 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:00.135817 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:58982.service: Deactivated successfully. Feb 13 19:28:00.137820 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:28:00.139425 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:28:00.148075 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:58984.service - OpenSSH per-connection server daemon (10.0.0.1:58984). Feb 13 19:28:00.150568 systemd-logind[1491]: Removed session 2. Feb 13 19:28:00.183183 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 58984 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:00.184452 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:00.188778 systemd-logind[1491]: New session 3 of user core. Feb 13 19:28:00.198905 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:28:00.252110 sshd[1597]: Connection closed by 10.0.0.1 port 58984 Feb 13 19:28:00.252363 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:00.256402 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:58984.service: Deactivated successfully. Feb 13 19:28:00.258353 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:28:00.258992 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:28:00.259821 systemd-logind[1491]: Removed session 3. Feb 13 19:28:00.377924 systemd-networkd[1442]: eth0: Gained IPv6LL Feb 13 19:28:00.380706 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:28:00.382453 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:28:00.393034 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:28:00.395828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:00.398183 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:28:00.414103 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:28:00.414468 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:28:00.416115 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:28:00.419408 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:28:01.055435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:01.057038 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:28:01.058302 systemd[1]: Startup finished in 728ms (kernel) + 6.082s (initrd) + 4.229s (userspace) = 11.040s. 
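systemd's summary above reports 728 ms (kernel) + 6.082 s (initrd) + 4.229 s (userspace) = 11.040 s; the components are rounded independently, so re-adding them lands within a millisecond of the printed total:

    kernel, initrd, userspace = 0.728, 6.082, 4.229  # seconds, from the log above
    total = kernel + initrd + userspace
    print(f"{total:.3f} s (log total 11.040 s; the gap is rounding of the parts)")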
Feb 13 19:28:01.088110 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:28:01.488401 kubelet[1624]: E0213 19:28:01.488250 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:28:01.492202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:28:01.492424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:28:01.492870 systemd[1]: kubelet.service: Consumed 947ms CPU time, 255.2M memory peak. Feb 13 19:28:10.264581 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:53236.service - OpenSSH per-connection server daemon (10.0.0.1:53236). Feb 13 19:28:10.302449 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 53236 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:10.303741 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:10.308018 systemd-logind[1491]: New session 4 of user core. Feb 13 19:28:10.317887 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:28:10.371189 sshd[1640]: Connection closed by 10.0.0.1 port 53236 Feb 13 19:28:10.371643 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:10.387263 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:53236.service: Deactivated successfully. Feb 13 19:28:10.388992 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:28:10.390552 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:28:10.395050 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:53244.service - OpenSSH per-connection server daemon (10.0.0.1:53244). Feb 13 19:28:10.396084 systemd-logind[1491]: Removed session 4. Feb 13 19:28:10.433072 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 53244 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:10.434483 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:10.438545 systemd-logind[1491]: New session 5 of user core. Feb 13 19:28:10.447887 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:28:10.496800 sshd[1648]: Connection closed by 10.0.0.1 port 53244 Feb 13 19:28:10.497141 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:10.514170 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:53244.service: Deactivated successfully. Feb 13 19:28:10.516541 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:28:10.518652 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:28:10.520135 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:53260.service - OpenSSH per-connection server daemon (10.0.0.1:53260). Feb 13 19:28:10.521010 systemd-logind[1491]: Removed session 5. Feb 13 19:28:10.559129 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 53260 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:10.560646 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:10.565262 systemd-logind[1491]: New session 6 of user core. 
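The kubelet exit above (status=1) is the usual failure mode on a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, so until that happens the unit keeps failing and systemd keeps rescheduling it (as the restart entries further down show). A minimal sketch of the same pre-flight check the error message describes; the path comes from the log, and the header fields in the comment are the standard KubeletConfiguration ones, shown only as a hint:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")  # path taken from the kubelet error above
    if cfg.is_file():
        # A kubeadm-generated file starts with the KubeletConfiguration header:
        #   apiVersion: kubelet.config.k8s.io/v1beta1
        #   kind: KubeletConfiguration
        print(cfg.read_text().splitlines()[0])
    else:
        print(f"{cfg} is missing - node not bootstrapped yet (kubeadm init/join writes it)")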
Feb 13 19:28:10.575894 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:28:10.628253 sshd[1656]: Connection closed by 10.0.0.1 port 53260 Feb 13 19:28:10.628664 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:10.643311 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:53260.service: Deactivated successfully. Feb 13 19:28:10.645158 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:28:10.646519 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:28:10.654991 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:53276.service - OpenSSH per-connection server daemon (10.0.0.1:53276). Feb 13 19:28:10.655824 systemd-logind[1491]: Removed session 6. Feb 13 19:28:10.690328 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 53276 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:10.691533 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:10.695401 systemd-logind[1491]: New session 7 of user core. Feb 13 19:28:10.710888 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:28:10.768641 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:28:10.769043 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:28:10.788758 sudo[1665]: pam_unix(sudo:session): session closed for user root Feb 13 19:28:10.790460 sshd[1664]: Connection closed by 10.0.0.1 port 53276 Feb 13 19:28:10.790825 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:10.813971 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:53276.service: Deactivated successfully. Feb 13 19:28:10.815814 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:28:10.817805 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:28:10.827081 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:53292.service - OpenSSH per-connection server daemon (10.0.0.1:53292). Feb 13 19:28:10.828036 systemd-logind[1491]: Removed session 7. Feb 13 19:28:10.863258 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 53292 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:10.864666 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:10.869090 systemd-logind[1491]: New session 8 of user core. Feb 13 19:28:10.878900 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:28:10.933164 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:28:10.933532 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:28:10.937261 sudo[1675]: pam_unix(sudo:session): session closed for user root Feb 13 19:28:10.943493 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:28:10.943834 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:28:10.964033 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:28:10.993638 augenrules[1697]: No rules Feb 13 19:28:10.994532 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:28:10.994869 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
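The sudo entries above use a fixed field layout (invoking user, working directory, target user, command), so they parse with a single expression. A small sketch over one of the logged lines:

    import re

    line = ("core : PWD=/home/core ; USER=root ; "
            "COMMAND=/usr/sbin/systemctl restart audit-rules")  # copied from the log

    m = re.match(r"(?P<who>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<user>\S+) ; COMMAND=(?P<cmd>.+)", line)
    print(m.groupdict() if m else "unparsed")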
Feb 13 19:28:10.996027 sudo[1674]: pam_unix(sudo:session): session closed for user root Feb 13 19:28:10.997486 sshd[1673]: Connection closed by 10.0.0.1 port 53292 Feb 13 19:28:10.997810 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:11.007656 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:53292.service: Deactivated successfully. Feb 13 19:28:11.009493 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:28:11.011073 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:28:11.021004 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:53298.service - OpenSSH per-connection server daemon (10.0.0.1:53298). Feb 13 19:28:11.022177 systemd-logind[1491]: Removed session 8. Feb 13 19:28:11.056910 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 53298 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:28:11.058404 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:28:11.063378 systemd-logind[1491]: New session 9 of user core. Feb 13 19:28:11.073894 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:28:11.126833 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:28:11.127162 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:28:11.459007 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:28:11.459127 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:28:11.714289 dockerd[1728]: time="2025-02-13T19:28:11.714148617Z" level=info msg="Starting up" Feb 13 19:28:11.719546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:28:11.726905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:11.944300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:11.949232 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:28:12.047918 kubelet[1760]: E0213 19:28:12.047185 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:28:12.054465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:28:12.054681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:28:12.055074 systemd[1]: kubelet.service: Consumed 205ms CPU time, 104.1M memory peak. Feb 13 19:28:12.095548 dockerd[1728]: time="2025-02-13T19:28:12.095506322Z" level=info msg="Loading containers: start." Feb 13 19:28:12.277796 kernel: Initializing XFRM netlink socket Feb 13 19:28:12.362210 systemd-networkd[1442]: docker0: Link UP Feb 13 19:28:12.398122 dockerd[1728]: time="2025-02-13T19:28:12.398091460Z" level=info msg="Loading containers: done." Feb 13 19:28:12.411323 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1817603715-merged.mount: Deactivated successfully. 
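"Scheduled restart job, restart counter is at 1" above is systemd re-queueing kubelet.service after the earlier exit. The gap between the first failure (19:28:01.492424) and this restart (19:28:11.719546) is a little over ten seconds, which would be consistent with a RestartSec= on the order of 10 s in the unit (an assumption; the unit file itself does not appear in the log). Checking the interval from the logged timestamps:

    from datetime import datetime

    failed = datetime.fromisoformat("2025-02-13 19:28:01.492424")     # kubelet.service: Failed
    restarted = datetime.fromisoformat("2025-02-13 19:28:11.719546")  # Scheduled restart job
    print(f"{(restarted - failed).total_seconds():.1f} s between failure and restart")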
Feb 13 19:28:12.414228 dockerd[1728]: time="2025-02-13T19:28:12.414187860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:28:12.414290 dockerd[1728]: time="2025-02-13T19:28:12.414276817Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:28:12.414417 dockerd[1728]: time="2025-02-13T19:28:12.414392845Z" level=info msg="Daemon has completed initialization" Feb 13 19:28:12.449580 dockerd[1728]: time="2025-02-13T19:28:12.449498488Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:28:12.449669 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:28:12.936311 containerd[1509]: time="2025-02-13T19:28:12.936274593Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:28:14.794961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049577043.mount: Deactivated successfully. Feb 13 19:28:15.640092 containerd[1509]: time="2025-02-13T19:28:15.640028601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:15.640844 containerd[1509]: time="2025-02-13T19:28:15.640783887Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:28:15.642108 containerd[1509]: time="2025-02-13T19:28:15.642062925Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:15.644677 containerd[1509]: time="2025-02-13T19:28:15.644631411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:15.645673 containerd[1509]: time="2025-02-13T19:28:15.645642627Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 2.709329272s" Feb 13 19:28:15.645709 containerd[1509]: time="2025-02-13T19:28:15.645673265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:28:15.646305 containerd[1509]: time="2025-02-13T19:28:15.646267699Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:28:16.723306 containerd[1509]: time="2025-02-13T19:28:16.723247867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:16.723984 containerd[1509]: time="2025-02-13T19:28:16.723952648Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:28:16.725329 containerd[1509]: time="2025-02-13T19:28:16.725280267Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:16.727905 containerd[1509]: time="2025-02-13T19:28:16.727850927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:16.728925 containerd[1509]: time="2025-02-13T19:28:16.728871200Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.082571712s" Feb 13 19:28:16.728925 containerd[1509]: time="2025-02-13T19:28:16.728918459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:28:16.729508 containerd[1509]: time="2025-02-13T19:28:16.729404520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:28:17.977939 containerd[1509]: time="2025-02-13T19:28:17.977874621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:17.978536 containerd[1509]: time="2025-02-13T19:28:17.978503250Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:28:17.979576 containerd[1509]: time="2025-02-13T19:28:17.979547378Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:17.982137 containerd[1509]: time="2025-02-13T19:28:17.982093322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:17.983093 containerd[1509]: time="2025-02-13T19:28:17.983041940Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.253598607s" Feb 13 19:28:17.983093 containerd[1509]: time="2025-02-13T19:28:17.983073770Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:28:17.983513 containerd[1509]: time="2025-02-13T19:28:17.983490201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:28:18.855922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041401122.mount: Deactivated successfully. 
Feb 13 19:28:19.523219 containerd[1509]: time="2025-02-13T19:28:19.523148239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:19.523974 containerd[1509]: time="2025-02-13T19:28:19.523936607Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:28:19.525163 containerd[1509]: time="2025-02-13T19:28:19.525130075Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:19.527247 containerd[1509]: time="2025-02-13T19:28:19.527201880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:19.528080 containerd[1509]: time="2025-02-13T19:28:19.528037106Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.544515576s" Feb 13 19:28:19.528117 containerd[1509]: time="2025-02-13T19:28:19.528085086Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:28:19.528574 containerd[1509]: time="2025-02-13T19:28:19.528540480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:28:20.038460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280717762.mount: Deactivated successfully. 
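The PullImage entries in this stretch report both "bytes read" (what containerd fetched from the registry) and the wall-clock time for each pull, so a rough effective rate falls out directly; the durations include unpacking as well as download, so treat the numbers as a lower bound on network throughput. Using the four images logged so far:

    pulls = [  # (image, bytes read, seconds) copied from the log above
        ("kube-apiserver:v1.32.2", 28_673_931, 2.709329272),
        ("kube-controller-manager:v1.32.2", 24_771_784, 1.082571712),
        ("kube-scheduler:v1.32.2", 19_170_276, 1.253598607),
        ("kube-proxy:v1.32.2", 30_908_839, 1.544515576),
    ]
    for name, nbytes, secs in pulls:
        print(f"{name}: {nbytes / secs / 2**20:.1f} MiB/s")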
Feb 13 19:28:20.773498 containerd[1509]: time="2025-02-13T19:28:20.773427853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:20.774291 containerd[1509]: time="2025-02-13T19:28:20.774225398Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:28:20.775895 containerd[1509]: time="2025-02-13T19:28:20.775863079Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:20.778846 containerd[1509]: time="2025-02-13T19:28:20.778815315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:20.780027 containerd[1509]: time="2025-02-13T19:28:20.779970450Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.251398633s" Feb 13 19:28:20.780027 containerd[1509]: time="2025-02-13T19:28:20.780020294Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:28:20.780526 containerd[1509]: time="2025-02-13T19:28:20.780495064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:28:21.210117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230523457.mount: Deactivated successfully. 
Feb 13 19:28:21.215744 containerd[1509]: time="2025-02-13T19:28:21.215689744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:21.216461 containerd[1509]: time="2025-02-13T19:28:21.216419202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:28:21.217588 containerd[1509]: time="2025-02-13T19:28:21.217543219Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:21.219606 containerd[1509]: time="2025-02-13T19:28:21.219559680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:21.220418 containerd[1509]: time="2025-02-13T19:28:21.220375640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 439.849798ms" Feb 13 19:28:21.220418 containerd[1509]: time="2025-02-13T19:28:21.220402581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:28:21.220911 containerd[1509]: time="2025-02-13T19:28:21.220869516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:28:21.746616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12953699.mount: Deactivated successfully. Feb 13 19:28:22.276675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:28:22.285926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:22.631185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:22.635071 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:28:22.708021 kubelet[2089]: E0213 19:28:22.707965 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:28:22.712230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:28:22.712442 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:28:22.712817 systemd[1]: kubelet.service: Consumed 199ms CPU time, 106.6M memory peak. 
Feb 13 19:28:23.978372 containerd[1509]: time="2025-02-13T19:28:23.978323472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:23.979288 containerd[1509]: time="2025-02-13T19:28:23.979241112Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:28:23.980636 containerd[1509]: time="2025-02-13T19:28:23.980590492Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:23.983426 containerd[1509]: time="2025-02-13T19:28:23.983399299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:23.984609 containerd[1509]: time="2025-02-13T19:28:23.984565175Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.763660152s" Feb 13 19:28:23.984650 containerd[1509]: time="2025-02-13T19:28:23.984607835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:28:26.027782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:26.027953 systemd[1]: kubelet.service: Consumed 199ms CPU time, 106.6M memory peak. Feb 13 19:28:26.042997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:26.067585 systemd[1]: Reload requested from client PID 2170 ('systemctl') (unit session-9.scope)... Feb 13 19:28:26.067600 systemd[1]: Reloading... Feb 13 19:28:26.160793 zram_generator::config[2218]: No configuration found. Feb 13 19:28:26.442051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:28:26.544163 systemd[1]: Reloading finished in 476 ms. Feb 13 19:28:26.585797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:26.590082 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:28:26.590515 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:26.591705 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:28:26.592029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:26.592076 systemd[1]: kubelet.service: Consumed 139ms CPU time, 91.9M memory peak. Feb 13 19:28:26.593749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:26.748244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
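The reload above also surfaces a warning that line 6 of /usr/lib/systemd/system/docker.socket still points at the legacy /var/run/docker.sock path; systemd already rewrites it at runtime, so this is cleanup rather than a fault. On Flatcar /usr is read-only, so the usual fix is a drop-in under /etc rather than editing the vendor unit; a sketch of that approach (the override path is illustrative, clearing the list-type ListenStream= before setting the new value is required for socket units, and it needs root plus a daemon-reload afterwards):

    from pathlib import Path

    dropin = Path("/etc/systemd/system/docker.socket.d/override.conf")  # illustrative drop-in path
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(
        "[Socket]\n"
        "ListenStream=\n"                  # clear the inherited list first
        "ListenStream=/run/docker.sock\n"  # non-legacy path the warning asks for
    )
    print(f"wrote {dropin}; follow with: systemctl daemon-reload")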
Feb 13 19:28:26.752020 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:28:26.786530 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:28:26.786530 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:28:26.786530 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:28:26.786932 kubelet[2265]: I0213 19:28:26.786590 2265 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:28:27.050181 kubelet[2265]: I0213 19:28:27.050123 2265 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:28:27.050181 kubelet[2265]: I0213 19:28:27.050161 2265 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:28:27.050438 kubelet[2265]: I0213 19:28:27.050415 2265 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:28:27.071621 kubelet[2265]: I0213 19:28:27.071587 2265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:28:27.072124 kubelet[2265]: E0213 19:28:27.072059 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:27.078317 kubelet[2265]: E0213 19:28:27.078255 2265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:28:27.078317 kubelet[2265]: I0213 19:28:27.078308 2265 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:28:27.083728 kubelet[2265]: I0213 19:28:27.083705 2265 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:28:27.084843 kubelet[2265]: I0213 19:28:27.084799 2265 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:28:27.085013 kubelet[2265]: I0213 19:28:27.084830 2265 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:28:27.085013 kubelet[2265]: I0213 19:28:27.085010 2265 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:28:27.085152 kubelet[2265]: I0213 19:28:27.085019 2265 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:28:27.085178 kubelet[2265]: I0213 19:28:27.085161 2265 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:28:27.087496 kubelet[2265]: I0213 19:28:27.087465 2265 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:28:27.087532 kubelet[2265]: I0213 19:28:27.087496 2265 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:28:27.087532 kubelet[2265]: I0213 19:28:27.087514 2265 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:28:27.087532 kubelet[2265]: I0213 19:28:27.087526 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:28:27.089977 kubelet[2265]: I0213 19:28:27.089958 2265 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:28:27.090798 kubelet[2265]: I0213 19:28:27.090324 2265 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:28:27.090798 kubelet[2265]: W0213 19:28:27.090624 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:28:27.090798 kubelet[2265]: E0213 19:28:27.090679 2265 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:27.091051 kubelet[2265]: W0213 19:28:27.091007 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:28:27.091051 kubelet[2265]: E0213 19:28:27.091053 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:27.091226 kubelet[2265]: W0213 19:28:27.091217 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:28:27.093007 kubelet[2265]: I0213 19:28:27.092987 2265 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:28:27.093066 kubelet[2265]: I0213 19:28:27.093017 2265 server.go:1287] "Started kubelet" Feb 13 19:28:27.094819 kubelet[2265]: I0213 19:28:27.094351 2265 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:28:27.094819 kubelet[2265]: I0213 19:28:27.094359 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:28:27.094819 kubelet[2265]: I0213 19:28:27.094658 2265 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:28:27.095700 kubelet[2265]: I0213 19:28:27.095683 2265 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:28:27.097126 kubelet[2265]: I0213 19:28:27.096602 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:28:27.097126 kubelet[2265]: I0213 19:28:27.096649 2265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:28:27.097126 kubelet[2265]: I0213 19:28:27.096693 2265 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:28:27.097126 kubelet[2265]: E0213 19:28:27.096960 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:28:27.097840 kubelet[2265]: W0213 19:28:27.097412 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:28:27.097840 kubelet[2265]: E0213 19:28:27.097466 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:27.097840 kubelet[2265]: E0213 19:28:27.097626 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Feb 13 19:28:27.097840 kubelet[2265]: I0213 19:28:27.097648 2265 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:28:27.097840 kubelet[2265]: I0213 19:28:27.097713 2265 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:28:27.099249 kubelet[2265]: I0213 19:28:27.099223 2265 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:28:27.099401 kubelet[2265]: I0213 19:28:27.099382 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:28:27.100483 kubelet[2265]: E0213 19:28:27.099067 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db363cfbe38d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:28:27.093001101 +0000 UTC m=+0.337000356,LastTimestamp:2025-02-13 19:28:27.093001101 +0000 UTC m=+0.337000356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:28:27.100602 kubelet[2265]: E0213 19:28:27.100467 2265 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:28:27.100706 kubelet[2265]: I0213 19:28:27.100680 2265 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:28:27.113027 kubelet[2265]: I0213 19:28:27.113010 2265 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:28:27.113104 kubelet[2265]: I0213 19:28:27.113093 2265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:28:27.113211 kubelet[2265]: I0213 19:28:27.113201 2265 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:28:27.115716 kubelet[2265]: I0213 19:28:27.115668 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:28:27.116986 kubelet[2265]: I0213 19:28:27.116958 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:28:27.116986 kubelet[2265]: I0213 19:28:27.116985 2265 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:28:27.117061 kubelet[2265]: I0213 19:28:27.117006 2265 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:28:27.117061 kubelet[2265]: I0213 19:28:27.117015 2265 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:28:27.117105 kubelet[2265]: E0213 19:28:27.117061 2265 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:28:27.118292 kubelet[2265]: W0213 19:28:27.118258 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:28:27.118408 kubelet[2265]: E0213 19:28:27.118290 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:27.197509 kubelet[2265]: E0213 19:28:27.197359 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:28:27.217889 kubelet[2265]: E0213 19:28:27.217826 2265 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:28:27.298545 kubelet[2265]: E0213 19:28:27.298494 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:28:27.298943 kubelet[2265]: E0213 19:28:27.298897 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Feb 13 19:28:27.399312 kubelet[2265]: E0213 19:28:27.399185 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:28:27.418447 kubelet[2265]: E0213 19:28:27.418379 2265 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:28:27.499984 kubelet[2265]: E0213 19:28:27.499936 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:28:27.503959 kubelet[2265]: I0213 19:28:27.503919 2265 policy_none.go:49] "None policy: Start" Feb 13 19:28:27.503959 kubelet[2265]: I0213 19:28:27.503960 2265 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:28:27.504065 kubelet[2265]: I0213 19:28:27.503973 2265 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:28:27.538020 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:28:27.553239 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:28:27.556186 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:28:27.566815 kubelet[2265]: I0213 19:28:27.566779 2265 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:28:27.567074 kubelet[2265]: I0213 19:28:27.567005 2265 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:28:27.567074 kubelet[2265]: I0213 19:28:27.567024 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:28:27.567316 kubelet[2265]: I0213 19:28:27.567289 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:28:27.568266 kubelet[2265]: E0213 19:28:27.568240 2265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:28:27.568341 kubelet[2265]: E0213 19:28:27.568293 2265 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:28:27.669099 kubelet[2265]: I0213 19:28:27.668967 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:28:27.669483 kubelet[2265]: E0213 19:28:27.669438 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Feb 13 19:28:27.700085 kubelet[2265]: E0213 19:28:27.700044 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Feb 13 19:28:27.826295 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:28:27.843096 kubelet[2265]: E0213 19:28:27.843066 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:28:27.844898 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. Feb 13 19:28:27.864802 kubelet[2265]: E0213 19:28:27.864786 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:28:27.867469 systemd[1]: Created slice kubepods-burstable-podc2d7491312bcda4fb1d357d427d514ed.slice - libcontainer container kubepods-burstable-podc2d7491312bcda4fb1d357d427d514ed.slice. 
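Because the kubelet runs with the systemd cgroup driver, the slice units created above follow its fixed naming scheme: one parent slice per QoS class under kubepods.slice, and one child slice per pod named after the pod UID. Read back as a hierarchy, the units created so far sit roughly like this:

    kubepods.slice
        kubepods-besteffort.slice
        kubepods-burstable.slice
            kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice
            kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice
            kubepods-burstable-podc2d7491312bcda4fb1d357d427d514ed.slice

The three pod UIDs belong to the static control-plane pods whose volume mounts and sandboxes appear in the entries that follow.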
Feb 13 19:28:27.868898 kubelet[2265]: E0213 19:28:27.868877 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:28:27.870482 kubelet[2265]: I0213 19:28:27.870469 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:28:27.870847 kubelet[2265]: E0213 19:28:27.870812 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Feb 13 19:28:27.896184 kubelet[2265]: W0213 19:28:27.896130 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:28:27.896233 kubelet[2265]: E0213 19:28:27.896188 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:27.902454 kubelet[2265]: I0213 19:28:27.902424 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2d7491312bcda4fb1d357d427d514ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d7491312bcda4fb1d357d427d514ed\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:27.902498 kubelet[2265]: I0213 19:28:27.902452 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2d7491312bcda4fb1d357d427d514ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2d7491312bcda4fb1d357d427d514ed\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:27.902498 kubelet[2265]: I0213 19:28:27.902472 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:27.902498 kubelet[2265]: I0213 19:28:27.902488 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:27.902571 kubelet[2265]: I0213 19:28:27.902523 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:27.902571 kubelet[2265]: I0213 19:28:27.902540 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c2d7491312bcda4fb1d357d427d514ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d7491312bcda4fb1d357d427d514ed\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:27.902571 kubelet[2265]: I0213 19:28:27.902556 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:27.902571 kubelet[2265]: I0213 19:28:27.902569 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:27.902655 kubelet[2265]: I0213 19:28:27.902582 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:28.144351 kubelet[2265]: E0213 19:28:28.144321 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:28.144821 containerd[1509]: time="2025-02-13T19:28:28.144793141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:28:28.166179 kubelet[2265]: E0213 19:28:28.166149 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:28.166523 containerd[1509]: time="2025-02-13T19:28:28.166466318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:28:28.169811 kubelet[2265]: E0213 19:28:28.169787 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:28.170092 containerd[1509]: time="2025-02-13T19:28:28.170067300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2d7491312bcda4fb1d357d427d514ed,Namespace:kube-system,Attempt:0,}" Feb 13 19:28:28.272730 kubelet[2265]: I0213 19:28:28.272682 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:28:28.273034 kubelet[2265]: E0213 19:28:28.272999 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Feb 13 19:28:28.456784 kubelet[2265]: W0213 19:28:28.456594 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 
19:28:28.456784 kubelet[2265]: E0213 19:28:28.456680 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:28.500751 kubelet[2265]: E0213 19:28:28.500692 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Feb 13 19:28:28.514458 kubelet[2265]: W0213 19:28:28.514399 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:28:28.514458 kubelet[2265]: E0213 19:28:28.514456 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:28.677475 kubelet[2265]: W0213 19:28:28.677422 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:28:28.677475 kubelet[2265]: E0213 19:28:28.677469 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:29.074709 kubelet[2265]: I0213 19:28:29.074669 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:28:29.075145 kubelet[2265]: E0213 19:28:29.075109 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Feb 13 19:28:29.159350 kubelet[2265]: E0213 19:28:29.159297 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:28:29.825890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778539552.mount: Deactivated successfully. 
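The lease controller's retry interval has now stepped from 200ms through 400ms and 800ms to 1.6s, i.e. it doubles after every refused connection. A minimal Go sketch of that kind of capped doubling backoff, purely illustrative and not the kubelet's own code (the 7s ceiling is an assumption):

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the previous retry interval, starting at 200ms
    // and never exceeding limit.
    func nextBackoff(prev, limit time.Duration) time.Duration {
        if prev == 0 {
            return 200 * time.Millisecond
        }
        if next := prev * 2; next < limit {
            return next
        }
        return limit
    }

    func main() {
        d := time.Duration(0)
        for i := 0; i < 6; i++ {
            d = nextBackoff(d, 7*time.Second)
            fmt.Println(d) // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s
        }
    }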
Feb 13 19:28:29.830662 containerd[1509]: time="2025-02-13T19:28:29.830617261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:29.833448 containerd[1509]: time="2025-02-13T19:28:29.833399237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:28:29.834352 containerd[1509]: time="2025-02-13T19:28:29.834296008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:29.836239 containerd[1509]: time="2025-02-13T19:28:29.836211821Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:29.836965 containerd[1509]: time="2025-02-13T19:28:29.836931831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:28:29.837964 containerd[1509]: time="2025-02-13T19:28:29.837932527Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:29.838799 containerd[1509]: time="2025-02-13T19:28:29.838775117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:28:29.840089 containerd[1509]: time="2025-02-13T19:28:29.840038245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:28:29.841211 containerd[1509]: time="2025-02-13T19:28:29.841185797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.671045119s" Feb 13 19:28:29.842397 containerd[1509]: time="2025-02-13T19:28:29.842354949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.69747729s" Feb 13 19:28:29.845250 containerd[1509]: time="2025-02-13T19:28:29.845230831Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.678713417s" Feb 13 19:28:30.000898 containerd[1509]: time="2025-02-13T19:28:30.000643079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:30.000898 containerd[1509]: time="2025-02-13T19:28:30.000694395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:30.000898 containerd[1509]: time="2025-02-13T19:28:30.000705687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:30.001171 containerd[1509]: time="2025-02-13T19:28:30.000744319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:30.001171 containerd[1509]: time="2025-02-13T19:28:30.000862781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:30.001171 containerd[1509]: time="2025-02-13T19:28:30.000872980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:30.001171 containerd[1509]: time="2025-02-13T19:28:30.001036377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:30.001369 containerd[1509]: time="2025-02-13T19:28:30.001311112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:30.001530 containerd[1509]: time="2025-02-13T19:28:30.000264810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:30.001530 containerd[1509]: time="2025-02-13T19:28:30.001482373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:30.001530 containerd[1509]: time="2025-02-13T19:28:30.001496760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:30.001640 containerd[1509]: time="2025-02-13T19:28:30.001561131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:30.022925 systemd[1]: Started cri-containerd-e96c3c017970908d5df78da219b969409d72b1bdaefb10a4294bf2c98a02106b.scope - libcontainer container e96c3c017970908d5df78da219b969409d72b1bdaefb10a4294bf2c98a02106b. Feb 13 19:28:30.026788 systemd[1]: Started cri-containerd-3375b7a7f2ef40fcbd859e11d8e46911bf6f5519e1424fb3886b3096a4297705.scope - libcontainer container 3375b7a7f2ef40fcbd859e11d8e46911bf6f5519e1424fb3886b3096a4297705. Feb 13 19:28:30.028409 systemd[1]: Started cri-containerd-7c954f319052db5b49030e668735e2daee763e30d977bb8ac36ca26e0063ffcd.scope - libcontainer container 7c954f319052db5b49030e668735e2daee763e30d977bb8ac36ca26e0063ffcd. 
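Each cri-containerd-*.scope started above wraps one pod sandbox that containerd just created for a RunPodSandbox request. To cross-check those IDs against what the CRI reports, the stock crictl client could be used (assuming it is pointed at the node's containerd socket, for example via /etc/crictl.yaml):

    crictl pods                    # sandbox list; IDs should match the scope names above
    crictl ps -a                   # containers placed inside those sandboxes
    crictl inspectp e96c3c017970   # detail for one of the sandboxes started above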
Feb 13 19:28:30.062701 containerd[1509]: time="2025-02-13T19:28:30.062651926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2d7491312bcda4fb1d357d427d514ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"e96c3c017970908d5df78da219b969409d72b1bdaefb10a4294bf2c98a02106b\"" Feb 13 19:28:30.063858 kubelet[2265]: E0213 19:28:30.063803 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:30.066782 containerd[1509]: time="2025-02-13T19:28:30.066748547Z" level=info msg="CreateContainer within sandbox \"e96c3c017970908d5df78da219b969409d72b1bdaefb10a4294bf2c98a02106b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:28:30.071629 containerd[1509]: time="2025-02-13T19:28:30.071533569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"3375b7a7f2ef40fcbd859e11d8e46911bf6f5519e1424fb3886b3096a4297705\"" Feb 13 19:28:30.072268 containerd[1509]: time="2025-02-13T19:28:30.072001556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c954f319052db5b49030e668735e2daee763e30d977bb8ac36ca26e0063ffcd\"" Feb 13 19:28:30.072320 kubelet[2265]: E0213 19:28:30.072106 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:30.072848 kubelet[2265]: E0213 19:28:30.072668 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:30.074864 containerd[1509]: time="2025-02-13T19:28:30.074821363Z" level=info msg="CreateContainer within sandbox \"7c954f319052db5b49030e668735e2daee763e30d977bb8ac36ca26e0063ffcd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:28:30.075029 containerd[1509]: time="2025-02-13T19:28:30.074967027Z" level=info msg="CreateContainer within sandbox \"3375b7a7f2ef40fcbd859e11d8e46911bf6f5519e1424fb3886b3096a4297705\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:28:30.096118 containerd[1509]: time="2025-02-13T19:28:30.095993611Z" level=info msg="CreateContainer within sandbox \"e96c3c017970908d5df78da219b969409d72b1bdaefb10a4294bf2c98a02106b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4ca18a5be979b8383458c64adcbc8444b8bf31f16d5865da383085a22d4cc784\"" Feb 13 19:28:30.096724 containerd[1509]: time="2025-02-13T19:28:30.096690989Z" level=info msg="StartContainer for \"4ca18a5be979b8383458c64adcbc8444b8bf31f16d5865da383085a22d4cc784\"" Feb 13 19:28:30.100384 containerd[1509]: time="2025-02-13T19:28:30.100349819Z" level=info msg="CreateContainer within sandbox \"7c954f319052db5b49030e668735e2daee763e30d977bb8ac36ca26e0063ffcd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c91c6896d8f84d255494b02439e9861000557fe98f5e7ad92e2ba0b348eb75e\"" Feb 13 19:28:30.101169 kubelet[2265]: E0213 19:28:30.101134 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="3.2s" Feb 13 19:28:30.101542 containerd[1509]: time="2025-02-13T19:28:30.101245118Z" level=info msg="StartContainer for \"9c91c6896d8f84d255494b02439e9861000557fe98f5e7ad92e2ba0b348eb75e\"" Feb 13 19:28:30.101935 containerd[1509]: time="2025-02-13T19:28:30.101901338Z" level=info msg="CreateContainer within sandbox \"3375b7a7f2ef40fcbd859e11d8e46911bf6f5519e1424fb3886b3096a4297705\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1578b7e46006166323055e94ce0980d0bdd33a7e48882008c362e0e63574cf82\"" Feb 13 19:28:30.102355 containerd[1509]: time="2025-02-13T19:28:30.102296369Z" level=info msg="StartContainer for \"1578b7e46006166323055e94ce0980d0bdd33a7e48882008c362e0e63574cf82\"" Feb 13 19:28:30.125008 systemd[1]: Started cri-containerd-4ca18a5be979b8383458c64adcbc8444b8bf31f16d5865da383085a22d4cc784.scope - libcontainer container 4ca18a5be979b8383458c64adcbc8444b8bf31f16d5865da383085a22d4cc784. Feb 13 19:28:30.128092 systemd[1]: Started cri-containerd-1578b7e46006166323055e94ce0980d0bdd33a7e48882008c362e0e63574cf82.scope - libcontainer container 1578b7e46006166323055e94ce0980d0bdd33a7e48882008c362e0e63574cf82. Feb 13 19:28:30.132291 systemd[1]: Started cri-containerd-9c91c6896d8f84d255494b02439e9861000557fe98f5e7ad92e2ba0b348eb75e.scope - libcontainer container 9c91c6896d8f84d255494b02439e9861000557fe98f5e7ad92e2ba0b348eb75e. Feb 13 19:28:30.175433 containerd[1509]: time="2025-02-13T19:28:30.175382686Z" level=info msg="StartContainer for \"4ca18a5be979b8383458c64adcbc8444b8bf31f16d5865da383085a22d4cc784\" returns successfully" Feb 13 19:28:30.175563 containerd[1509]: time="2025-02-13T19:28:30.175466192Z" level=info msg="StartContainer for \"1578b7e46006166323055e94ce0980d0bdd33a7e48882008c362e0e63574cf82\" returns successfully" Feb 13 19:28:30.181792 containerd[1509]: time="2025-02-13T19:28:30.181738353Z" level=info msg="StartContainer for \"9c91c6896d8f84d255494b02439e9861000557fe98f5e7ad92e2ba0b348eb75e\" returns successfully" Feb 13 19:28:30.677342 kubelet[2265]: I0213 19:28:30.677093 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:28:31.132593 kubelet[2265]: E0213 19:28:31.132553 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:28:31.133036 kubelet[2265]: E0213 19:28:31.132663 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:31.134641 kubelet[2265]: E0213 19:28:31.134616 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:28:31.134723 kubelet[2265]: E0213 19:28:31.134709 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:31.136587 kubelet[2265]: E0213 19:28:31.136552 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:28:31.136726 kubelet[2265]: E0213 19:28:31.136676 2265 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:31.586802 kubelet[2265]: I0213 19:28:31.586742 2265 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:28:31.598837 kubelet[2265]: I0213 19:28:31.598647 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:31.604912 kubelet[2265]: E0213 19:28:31.604783 2265 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:31.604912 kubelet[2265]: I0213 19:28:31.604804 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:31.606158 kubelet[2265]: E0213 19:28:31.606098 2265 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:31.606158 kubelet[2265]: I0213 19:28:31.606115 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:31.607568 kubelet[2265]: E0213 19:28:31.607548 2265 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:32.093513 kubelet[2265]: I0213 19:28:32.093480 2265 apiserver.go:52] "Watching apiserver" Feb 13 19:28:32.097752 kubelet[2265]: I0213 19:28:32.097732 2265 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:28:32.136727 kubelet[2265]: I0213 19:28:32.136702 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:32.137096 kubelet[2265]: I0213 19:28:32.136795 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:32.137096 kubelet[2265]: I0213 19:28:32.136892 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:32.138530 kubelet[2265]: E0213 19:28:32.138476 2265 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:32.138636 kubelet[2265]: E0213 19:28:32.138600 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:32.138685 kubelet[2265]: E0213 19:28:32.138668 2265 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:32.138724 kubelet[2265]: E0213 19:28:32.138672 2265 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:32.138798 kubelet[2265]: E0213 19:28:32.138781 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:32.138851 kubelet[2265]: E0213 19:28:32.138831 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:33.137572 kubelet[2265]: I0213 19:28:33.137547 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:33.137961 kubelet[2265]: I0213 19:28:33.137662 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:33.142374 kubelet[2265]: E0213 19:28:33.142316 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:33.144098 kubelet[2265]: E0213 19:28:33.143985 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:33.187695 systemd[1]: Reload requested from client PID 2544 ('systemctl') (unit session-9.scope)... Feb 13 19:28:33.187709 systemd[1]: Reloading... Feb 13 19:28:33.270799 zram_generator::config[2594]: No configuration found. Feb 13 19:28:33.374198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:28:33.488721 systemd[1]: Reloading finished in 300 ms. Feb 13 19:28:33.512659 kubelet[2265]: I0213 19:28:33.512631 2265 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:28:33.512833 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:33.522369 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:28:33.522659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:33.522709 systemd[1]: kubelet.service: Consumed 784ms CPU time, 131.1M memory peak. Feb 13 19:28:33.537023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:28:33.704181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:28:33.708578 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:28:33.744812 kubelet[2633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:28:33.744812 kubelet[2633]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:28:33.744812 kubelet[2633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
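The mirror-pod failures logged shortly before this reload ("no PriorityClass with name system-node-critical was found") are transient: system-node-critical and system-cluster-critical are built-in priority classes that the API server creates during its own bootstrap, so they cannot exist until the kube-apiserver static pod is serving. Once the control plane answers, their presence could be confirmed with:

    kubectl get priorityclass system-node-critical system-cluster-critical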
Feb 13 19:28:33.745216 kubelet[2633]: I0213 19:28:33.744800 2633 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:28:33.751101 kubelet[2633]: I0213 19:28:33.751075 2633 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:28:33.751101 kubelet[2633]: I0213 19:28:33.751093 2633 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:28:33.751284 kubelet[2633]: I0213 19:28:33.751268 2633 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:28:33.752240 kubelet[2633]: I0213 19:28:33.752220 2633 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:28:33.754242 kubelet[2633]: I0213 19:28:33.754224 2633 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:28:33.756506 kubelet[2633]: E0213 19:28:33.756472 2633 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:28:33.756506 kubelet[2633]: I0213 19:28:33.756497 2633 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:28:33.761927 kubelet[2633]: I0213 19:28:33.761903 2633 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:28:33.762160 kubelet[2633]: I0213 19:28:33.762121 2633 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:28:33.762288 kubelet[2633]: I0213 19:28:33.762152 2633 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:28:33.762374 kubelet[2633]: I0213 19:28:33.762296 2633 topology_manager.go:138] "Creating 
topology manager with none policy" Feb 13 19:28:33.762374 kubelet[2633]: I0213 19:28:33.762307 2633 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:28:33.762374 kubelet[2633]: I0213 19:28:33.762349 2633 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:28:33.762530 kubelet[2633]: I0213 19:28:33.762489 2633 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:28:33.762530 kubelet[2633]: I0213 19:28:33.762499 2633 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:28:33.762530 kubelet[2633]: I0213 19:28:33.762514 2633 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:28:33.762530 kubelet[2633]: I0213 19:28:33.762522 2633 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:28:33.765786 kubelet[2633]: I0213 19:28:33.763168 2633 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:28:33.765786 kubelet[2633]: I0213 19:28:33.763557 2633 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:28:33.765786 kubelet[2633]: I0213 19:28:33.764029 2633 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:28:33.765786 kubelet[2633]: I0213 19:28:33.764064 2633 server.go:1287] "Started kubelet" Feb 13 19:28:33.765786 kubelet[2633]: I0213 19:28:33.765338 2633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:28:33.765786 kubelet[2633]: I0213 19:28:33.765577 2633 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:28:33.765786 kubelet[2633]: I0213 19:28:33.765620 2633 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:28:33.766357 kubelet[2633]: I0213 19:28:33.766334 2633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:28:33.767480 kubelet[2633]: E0213 19:28:33.767458 2633 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:28:33.768508 kubelet[2633]: I0213 19:28:33.766518 2633 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:28:33.769072 kubelet[2633]: E0213 19:28:33.768918 2633 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:28:33.769129 kubelet[2633]: I0213 19:28:33.769103 2633 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:28:33.769289 kubelet[2633]: I0213 19:28:33.769269 2633 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:28:33.769447 kubelet[2633]: I0213 19:28:33.769431 2633 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:28:33.770546 kubelet[2633]: I0213 19:28:33.770528 2633 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:28:33.771042 kubelet[2633]: I0213 19:28:33.771015 2633 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:28:33.771234 kubelet[2633]: I0213 19:28:33.771213 2633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:28:33.773615 kubelet[2633]: I0213 19:28:33.773568 2633 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:28:33.781609 kubelet[2633]: I0213 19:28:33.780867 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:28:33.783025 kubelet[2633]: I0213 19:28:33.782109 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:28:33.783025 kubelet[2633]: I0213 19:28:33.782126 2633 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:28:33.783025 kubelet[2633]: I0213 19:28:33.782144 2633 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
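With the restarted kubelet again listening on 0.0.0.0:10250 and serving the podresources socket, a quick local liveness probe is possible; assuming the default healthz port of 127.0.0.1:10248 has not been overridden, the check would be:

    curl -s http://127.0.0.1:10248/healthz   # should print "ok" once the main sync loop is healthy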
Feb 13 19:28:33.783025 kubelet[2633]: I0213 19:28:33.782150 2633 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:28:33.783025 kubelet[2633]: E0213 19:28:33.782195 2633 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:28:33.806995 kubelet[2633]: I0213 19:28:33.806962 2633 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:28:33.806995 kubelet[2633]: I0213 19:28:33.806981 2633 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:28:33.806995 kubelet[2633]: I0213 19:28:33.807001 2633 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:28:33.807194 kubelet[2633]: I0213 19:28:33.807180 2633 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:28:33.807234 kubelet[2633]: I0213 19:28:33.807193 2633 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:28:33.807234 kubelet[2633]: I0213 19:28:33.807215 2633 policy_none.go:49] "None policy: Start" Feb 13 19:28:33.807234 kubelet[2633]: I0213 19:28:33.807225 2633 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:28:33.807342 kubelet[2633]: I0213 19:28:33.807237 2633 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:28:33.807378 kubelet[2633]: I0213 19:28:33.807363 2633 state_mem.go:75] "Updated machine memory state" Feb 13 19:28:33.810959 kubelet[2633]: I0213 19:28:33.810932 2633 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:28:33.811298 kubelet[2633]: I0213 19:28:33.811122 2633 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:28:33.811298 kubelet[2633]: I0213 19:28:33.811139 2633 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:28:33.811387 kubelet[2633]: I0213 19:28:33.811343 2633 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:28:33.812064 kubelet[2633]: E0213 19:28:33.812049 2633 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:28:33.883377 kubelet[2633]: I0213 19:28:33.883335 2633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:33.883518 kubelet[2633]: I0213 19:28:33.883473 2633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:33.883518 kubelet[2633]: I0213 19:28:33.883491 2633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:33.889389 kubelet[2633]: E0213 19:28:33.889221 2633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:33.889534 kubelet[2633]: E0213 19:28:33.889517 2633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:33.915641 kubelet[2633]: I0213 19:28:33.915611 2633 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:28:33.920600 kubelet[2633]: I0213 19:28:33.920572 2633 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 19:28:33.920709 kubelet[2633]: I0213 19:28:33.920633 2633 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:28:34.071327 kubelet[2633]: I0213 19:28:34.071292 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:34.071327 kubelet[2633]: I0213 19:28:34.071325 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2d7491312bcda4fb1d357d427d514ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d7491312bcda4fb1d357d427d514ed\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:34.071467 kubelet[2633]: I0213 19:28:34.071343 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2d7491312bcda4fb1d357d427d514ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2d7491312bcda4fb1d357d427d514ed\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:34.071467 kubelet[2633]: I0213 19:28:34.071361 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:34.071467 kubelet[2633]: I0213 19:28:34.071444 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:34.071543 kubelet[2633]: I0213 19:28:34.071474 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2d7491312bcda4fb1d357d427d514ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2d7491312bcda4fb1d357d427d514ed\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:34.071543 kubelet[2633]: I0213 19:28:34.071494 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:34.071543 kubelet[2633]: I0213 19:28:34.071513 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:34.071543 kubelet[2633]: I0213 19:28:34.071535 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:28:34.190286 kubelet[2633]: E0213 19:28:34.190130 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:34.190286 kubelet[2633]: E0213 19:28:34.190187 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:34.190286 kubelet[2633]: E0213 19:28:34.190187 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:34.763809 kubelet[2633]: I0213 19:28:34.763775 2633 apiserver.go:52] "Watching apiserver" Feb 13 19:28:34.769982 kubelet[2633]: I0213 19:28:34.769927 2633 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:28:34.795754 kubelet[2633]: I0213 19:28:34.794231 2633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:34.795754 kubelet[2633]: E0213 19:28:34.794620 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:34.795754 kubelet[2633]: I0213 19:28:34.795047 2633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:34.809064 kubelet[2633]: E0213 19:28:34.809014 2633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:28:34.809220 kubelet[2633]: E0213 19:28:34.809201 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:34.809294 kubelet[2633]: E0213 19:28:34.809278 2633 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:28:34.809367 kubelet[2633]: E0213 19:28:34.809351 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:34.853329 kubelet[2633]: I0213 19:28:34.853266 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8532473029999998 podStartE2EDuration="1.853247303s" podCreationTimestamp="2025-02-13 19:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:28:34.845532157 +0000 UTC m=+1.132552090" watchObservedRunningTime="2025-02-13 19:28:34.853247303 +0000 UTC m=+1.140267236" Feb 13 19:28:34.853491 kubelet[2633]: I0213 19:28:34.853384 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.853377664 podStartE2EDuration="1.853377664s" podCreationTimestamp="2025-02-13 19:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:28:34.853064022 +0000 UTC m=+1.140083955" watchObservedRunningTime="2025-02-13 19:28:34.853377664 +0000 UTC m=+1.140397597" Feb 13 19:28:34.871112 kubelet[2633]: I0213 19:28:34.871044 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.871022859 podStartE2EDuration="1.871022859s" podCreationTimestamp="2025-02-13 19:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:28:34.863081989 +0000 UTC m=+1.150101922" watchObservedRunningTime="2025-02-13 19:28:34.871022859 +0000 UTC m=+1.158042792" Feb 13 19:28:35.795939 kubelet[2633]: E0213 19:28:35.795907 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:35.797039 kubelet[2633]: E0213 19:28:35.797009 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:36.797239 kubelet[2633]: E0213 19:28:36.797198 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:37.310123 kubelet[2633]: E0213 19:28:37.310060 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:38.519825 sudo[1709]: pam_unix(sudo:session): session closed for user root Feb 13 19:28:38.521349 sshd[1708]: Connection closed by 10.0.0.1 port 53298 Feb 13 19:28:38.521569 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Feb 13 19:28:38.528299 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:53298.service: Deactivated successfully. Feb 13 19:28:38.530647 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:28:38.530875 systemd[1]: session-9.scope: Consumed 4.067s CPU time, 209.2M memory peak. 
Feb 13 19:28:38.532010 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:28:38.532903 systemd-logind[1491]: Removed session 9. Feb 13 19:28:39.848076 kubelet[2633]: I0213 19:28:39.848041 2633 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:28:39.848493 containerd[1509]: time="2025-02-13T19:28:39.848378359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:28:39.848721 kubelet[2633]: I0213 19:28:39.848557 2633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:28:39.945321 kubelet[2633]: E0213 19:28:39.945286 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:40.433037 systemd[1]: Created slice kubepods-besteffort-podf9d2b8c5_52ba_44a2_8817_072dc996d3b4.slice - libcontainer container kubepods-besteffort-podf9d2b8c5_52ba_44a2_8817_072dc996d3b4.slice. Feb 13 19:28:40.516419 kubelet[2633]: I0213 19:28:40.516359 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9d2b8c5-52ba-44a2-8817-072dc996d3b4-kube-proxy\") pod \"kube-proxy-bxx67\" (UID: \"f9d2b8c5-52ba-44a2-8817-072dc996d3b4\") " pod="kube-system/kube-proxy-bxx67" Feb 13 19:28:40.516419 kubelet[2633]: I0213 19:28:40.516412 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9d2b8c5-52ba-44a2-8817-072dc996d3b4-xtables-lock\") pod \"kube-proxy-bxx67\" (UID: \"f9d2b8c5-52ba-44a2-8817-072dc996d3b4\") " pod="kube-system/kube-proxy-bxx67" Feb 13 19:28:40.516419 kubelet[2633]: I0213 19:28:40.516429 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9d2b8c5-52ba-44a2-8817-072dc996d3b4-lib-modules\") pod \"kube-proxy-bxx67\" (UID: \"f9d2b8c5-52ba-44a2-8817-072dc996d3b4\") " pod="kube-system/kube-proxy-bxx67" Feb 13 19:28:40.516603 kubelet[2633]: I0213 19:28:40.516446 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfv6n\" (UniqueName: \"kubernetes.io/projected/f9d2b8c5-52ba-44a2-8817-072dc996d3b4-kube-api-access-kfv6n\") pod \"kube-proxy-bxx67\" (UID: \"f9d2b8c5-52ba-44a2-8817-072dc996d3b4\") " pod="kube-system/kube-proxy-bxx67" Feb 13 19:28:40.621540 kubelet[2633]: E0213 19:28:40.621493 2633 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:28:40.621540 kubelet[2633]: E0213 19:28:40.621523 2633 projected.go:194] Error preparing data for projected volume kube-api-access-kfv6n for pod kube-system/kube-proxy-bxx67: configmap "kube-root-ca.crt" not found Feb 13 19:28:40.621723 kubelet[2633]: E0213 19:28:40.621574 2633 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9d2b8c5-52ba-44a2-8817-072dc996d3b4-kube-api-access-kfv6n podName:f9d2b8c5-52ba-44a2-8817-072dc996d3b4 nodeName:}" failed. No retries permitted until 2025-02-13 19:28:41.121555973 +0000 UTC m=+7.408575906 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kfv6n" (UniqueName: "kubernetes.io/projected/f9d2b8c5-52ba-44a2-8817-072dc996d3b4-kube-api-access-kfv6n") pod "kube-proxy-bxx67" (UID: "f9d2b8c5-52ba-44a2-8817-072dc996d3b4") : configmap "kube-root-ca.crt" not found Feb 13 19:28:40.941448 systemd[1]: Created slice kubepods-besteffort-podafd705e8_ec23_4567_9c63_e4951722f545.slice - libcontainer container kubepods-besteffort-podafd705e8_ec23_4567_9c63_e4951722f545.slice. Feb 13 19:28:41.020137 kubelet[2633]: I0213 19:28:41.020096 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c486\" (UniqueName: \"kubernetes.io/projected/afd705e8-ec23-4567-9c63-e4951722f545-kube-api-access-8c486\") pod \"tigera-operator-7d68577dc5-wzhg6\" (UID: \"afd705e8-ec23-4567-9c63-e4951722f545\") " pod="tigera-operator/tigera-operator-7d68577dc5-wzhg6" Feb 13 19:28:41.020137 kubelet[2633]: I0213 19:28:41.020132 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/afd705e8-ec23-4567-9c63-e4951722f545-var-lib-calico\") pod \"tigera-operator-7d68577dc5-wzhg6\" (UID: \"afd705e8-ec23-4567-9c63-e4951722f545\") " pod="tigera-operator/tigera-operator-7d68577dc5-wzhg6" Feb 13 19:28:41.245201 containerd[1509]: time="2025-02-13T19:28:41.245061840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-wzhg6,Uid:afd705e8-ec23-4567-9c63-e4951722f545,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:28:41.268346 containerd[1509]: time="2025-02-13T19:28:41.267677187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:41.268346 containerd[1509]: time="2025-02-13T19:28:41.267750847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:41.268346 containerd[1509]: time="2025-02-13T19:28:41.267804059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:41.268503 containerd[1509]: time="2025-02-13T19:28:41.268414672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:41.290925 systemd[1]: Started cri-containerd-e0cad05e6d3c9f14661a5ec867947a42c6dcedfca386d317781dab3d79265bd3.scope - libcontainer container e0cad05e6d3c9f14661a5ec867947a42c6dcedfca386d317781dab3d79265bd3. 
Feb 13 19:28:41.328072 containerd[1509]: time="2025-02-13T19:28:41.328025423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-wzhg6,Uid:afd705e8-ec23-4567-9c63-e4951722f545,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e0cad05e6d3c9f14661a5ec867947a42c6dcedfca386d317781dab3d79265bd3\"" Feb 13 19:28:41.329782 containerd[1509]: time="2025-02-13T19:28:41.329734016Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:28:41.341479 kubelet[2633]: E0213 19:28:41.341444 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:41.341902 containerd[1509]: time="2025-02-13T19:28:41.341854462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxx67,Uid:f9d2b8c5-52ba-44a2-8817-072dc996d3b4,Namespace:kube-system,Attempt:0,}" Feb 13 19:28:41.362957 containerd[1509]: time="2025-02-13T19:28:41.362857390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:41.363511 containerd[1509]: time="2025-02-13T19:28:41.363470096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:41.363511 containerd[1509]: time="2025-02-13T19:28:41.363488160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:41.363603 containerd[1509]: time="2025-02-13T19:28:41.363566950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:41.383895 systemd[1]: Started cri-containerd-0eee6c975fc318e218c6f5e60ea7693966cebd30d83b78a8cc89e41b6e086861.scope - libcontainer container 0eee6c975fc318e218c6f5e60ea7693966cebd30d83b78a8cc89e41b6e086861. Feb 13 19:28:41.405299 containerd[1509]: time="2025-02-13T19:28:41.405230222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxx67,Uid:f9d2b8c5-52ba-44a2-8817-072dc996d3b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eee6c975fc318e218c6f5e60ea7693966cebd30d83b78a8cc89e41b6e086861\"" Feb 13 19:28:41.406032 kubelet[2633]: E0213 19:28:41.406008 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:41.408158 containerd[1509]: time="2025-02-13T19:28:41.408074789Z" level=info msg="CreateContainer within sandbox \"0eee6c975fc318e218c6f5e60ea7693966cebd30d83b78a8cc89e41b6e086861\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:28:41.426044 containerd[1509]: time="2025-02-13T19:28:41.425990267Z" level=info msg="CreateContainer within sandbox \"0eee6c975fc318e218c6f5e60ea7693966cebd30d83b78a8cc89e41b6e086861\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db33f398de680c43e7b69ce7a3604944d5c274da8c6c16373be943098d4496ce\"" Feb 13 19:28:41.426614 containerd[1509]: time="2025-02-13T19:28:41.426569039Z" level=info msg="StartContainer for \"db33f398de680c43e7b69ce7a3604944d5c274da8c6c16373be943098d4496ce\"" Feb 13 19:28:41.454895 systemd[1]: Started cri-containerd-db33f398de680c43e7b69ce7a3604944d5c274da8c6c16373be943098d4496ce.scope - libcontainer container db33f398de680c43e7b69ce7a3604944d5c274da8c6c16373be943098d4496ce. 
Feb 13 19:28:41.486517 containerd[1509]: time="2025-02-13T19:28:41.486478941Z" level=info msg="StartContainer for \"db33f398de680c43e7b69ce7a3604944d5c274da8c6c16373be943098d4496ce\" returns successfully" Feb 13 19:28:41.804724 kubelet[2633]: E0213 19:28:41.804699 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:44.021065 update_engine[1499]: I20250213 19:28:44.020983 1499 update_attempter.cc:509] Updating boot flags... Feb 13 19:28:44.054502 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2977) Feb 13 19:28:44.084843 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2978) Feb 13 19:28:45.157931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781744343.mount: Deactivated successfully. Feb 13 19:28:45.696274 containerd[1509]: time="2025-02-13T19:28:45.696224179Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:45.696967 containerd[1509]: time="2025-02-13T19:28:45.696909870Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:28:45.698120 containerd[1509]: time="2025-02-13T19:28:45.698067206Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:45.700076 containerd[1509]: time="2025-02-13T19:28:45.700042563Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:45.700742 containerd[1509]: time="2025-02-13T19:28:45.700693509Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.370902132s" Feb 13 19:28:45.700742 containerd[1509]: time="2025-02-13T19:28:45.700731029Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:28:45.702670 containerd[1509]: time="2025-02-13T19:28:45.702635403Z" level=info msg="CreateContainer within sandbox \"e0cad05e6d3c9f14661a5ec867947a42c6dcedfca386d317781dab3d79265bd3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:28:45.714331 containerd[1509]: time="2025-02-13T19:28:45.714288051Z" level=info msg="CreateContainer within sandbox \"e0cad05e6d3c9f14661a5ec867947a42c6dcedfca386d317781dab3d79265bd3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"978cb3c8297e2bf57808002ec3fa3b9c8a30181e8d85ea63009affb88c9108f2\"" Feb 13 19:28:45.714814 containerd[1509]: time="2025-02-13T19:28:45.714719280Z" level=info msg="StartContainer for \"978cb3c8297e2bf57808002ec3fa3b9c8a30181e8d85ea63009affb88c9108f2\"" Feb 13 19:28:45.748892 systemd[1]: Started cri-containerd-978cb3c8297e2bf57808002ec3fa3b9c8a30181e8d85ea63009affb88c9108f2.scope - libcontainer container 
978cb3c8297e2bf57808002ec3fa3b9c8a30181e8d85ea63009affb88c9108f2. Feb 13 19:28:45.832971 containerd[1509]: time="2025-02-13T19:28:45.832911538Z" level=info msg="StartContainer for \"978cb3c8297e2bf57808002ec3fa3b9c8a30181e8d85ea63009affb88c9108f2\" returns successfully" Feb 13 19:28:45.898846 kubelet[2633]: E0213 19:28:45.898796 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:45.907918 kubelet[2633]: I0213 19:28:45.907858 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bxx67" podStartSLOduration=5.907842626 podStartE2EDuration="5.907842626s" podCreationTimestamp="2025-02-13 19:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:28:41.855944762 +0000 UTC m=+8.142964696" watchObservedRunningTime="2025-02-13 19:28:45.907842626 +0000 UTC m=+12.194862559" Feb 13 19:28:46.840211 kubelet[2633]: E0213 19:28:46.840176 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:46.847998 kubelet[2633]: I0213 19:28:46.847945 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-wzhg6" podStartSLOduration=2.4756896409999998 podStartE2EDuration="6.84792679s" podCreationTimestamp="2025-02-13 19:28:40 +0000 UTC" firstStartedPulling="2025-02-13 19:28:41.329278419 +0000 UTC m=+7.616298352" lastFinishedPulling="2025-02-13 19:28:45.701515568 +0000 UTC m=+11.988535501" observedRunningTime="2025-02-13 19:28:46.847783599 +0000 UTC m=+13.134803532" watchObservedRunningTime="2025-02-13 19:28:46.84792679 +0000 UTC m=+13.134946723" Feb 13 19:28:47.314594 kubelet[2633]: E0213 19:28:47.314560 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:48.656799 systemd[1]: Created slice kubepods-besteffort-pod3fee3e96_f188_45c7_9302_3ea17564025c.slice - libcontainer container kubepods-besteffort-pod3fee3e96_f188_45c7_9302_3ea17564025c.slice. 
Feb 13 19:28:48.671093 kubelet[2633]: I0213 19:28:48.671042 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fee3e96-f188-45c7-9302-3ea17564025c-tigera-ca-bundle\") pod \"calico-typha-5d87594d47-mrvrm\" (UID: \"3fee3e96-f188-45c7-9302-3ea17564025c\") " pod="calico-system/calico-typha-5d87594d47-mrvrm" Feb 13 19:28:48.671093 kubelet[2633]: I0213 19:28:48.671102 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7zqg\" (UniqueName: \"kubernetes.io/projected/3fee3e96-f188-45c7-9302-3ea17564025c-kube-api-access-f7zqg\") pod \"calico-typha-5d87594d47-mrvrm\" (UID: \"3fee3e96-f188-45c7-9302-3ea17564025c\") " pod="calico-system/calico-typha-5d87594d47-mrvrm" Feb 13 19:28:48.671620 kubelet[2633]: I0213 19:28:48.671130 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3fee3e96-f188-45c7-9302-3ea17564025c-typha-certs\") pod \"calico-typha-5d87594d47-mrvrm\" (UID: \"3fee3e96-f188-45c7-9302-3ea17564025c\") " pod="calico-system/calico-typha-5d87594d47-mrvrm" Feb 13 19:28:48.692682 systemd[1]: Created slice kubepods-besteffort-pod9fe260ae_d144_4e4a_80b0_dab45f8cece3.slice - libcontainer container kubepods-besteffort-pod9fe260ae_d144_4e4a_80b0_dab45f8cece3.slice. Feb 13 19:28:48.771514 kubelet[2633]: I0213 19:28:48.771466 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-var-run-calico\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771514 kubelet[2633]: I0213 19:28:48.771508 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-lib-modules\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771514 kubelet[2633]: I0213 19:28:48.771522 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe260ae-d144-4e4a-80b0-dab45f8cece3-tigera-ca-bundle\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771754 kubelet[2633]: I0213 19:28:48.771535 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9fe260ae-d144-4e4a-80b0-dab45f8cece3-node-certs\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771754 kubelet[2633]: I0213 19:28:48.771564 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-cni-bin-dir\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771754 kubelet[2633]: I0213 19:28:48.771610 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-cni-net-dir\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771754 kubelet[2633]: I0213 19:28:48.771717 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-flexvol-driver-host\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771754 kubelet[2633]: I0213 19:28:48.771745 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-xtables-lock\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771940 kubelet[2633]: I0213 19:28:48.771775 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g777l\" (UniqueName: \"kubernetes.io/projected/9fe260ae-d144-4e4a-80b0-dab45f8cece3-kube-api-access-g777l\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771940 kubelet[2633]: I0213 19:28:48.771791 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-policysync\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771940 kubelet[2633]: I0213 19:28:48.771804 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-var-lib-calico\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.771940 kubelet[2633]: I0213 19:28:48.771828 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9fe260ae-d144-4e4a-80b0-dab45f8cece3-cni-log-dir\") pod \"calico-node-m8v5l\" (UID: \"9fe260ae-d144-4e4a-80b0-dab45f8cece3\") " pod="calico-system/calico-node-m8v5l" Feb 13 19:28:48.796408 kubelet[2633]: E0213 19:28:48.796354 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:28:48.873032 kubelet[2633]: I0213 19:28:48.872979 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5eaecbe6-c19b-4299-995d-b27991011c1a-registration-dir\") pod \"csi-node-driver-k6nvb\" (UID: \"5eaecbe6-c19b-4299-995d-b27991011c1a\") " pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:28:48.873747 kubelet[2633]: I0213 19:28:48.873219 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5eaecbe6-c19b-4299-995d-b27991011c1a-kubelet-dir\") pod \"csi-node-driver-k6nvb\" (UID: \"5eaecbe6-c19b-4299-995d-b27991011c1a\") " pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:28:48.873747 kubelet[2633]: I0213 19:28:48.873285 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4zbh\" (UniqueName: \"kubernetes.io/projected/5eaecbe6-c19b-4299-995d-b27991011c1a-kube-api-access-l4zbh\") pod \"csi-node-driver-k6nvb\" (UID: \"5eaecbe6-c19b-4299-995d-b27991011c1a\") " pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:28:48.873747 kubelet[2633]: I0213 19:28:48.873304 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5eaecbe6-c19b-4299-995d-b27991011c1a-varrun\") pod \"csi-node-driver-k6nvb\" (UID: \"5eaecbe6-c19b-4299-995d-b27991011c1a\") " pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:28:48.873747 kubelet[2633]: I0213 19:28:48.873316 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5eaecbe6-c19b-4299-995d-b27991011c1a-socket-dir\") pod \"csi-node-driver-k6nvb\" (UID: \"5eaecbe6-c19b-4299-995d-b27991011c1a\") " pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:28:48.874542 kubelet[2633]: E0213 19:28:48.874514 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.874598 kubelet[2633]: W0213 19:28:48.874531 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.874598 kubelet[2633]: E0213 19:28:48.874566 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.875022 kubelet[2633]: E0213 19:28:48.874831 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.875022 kubelet[2633]: W0213 19:28:48.874868 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.875022 kubelet[2633]: E0213 19:28:48.874894 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.875185 kubelet[2633]: E0213 19:28:48.875164 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.875185 kubelet[2633]: W0213 19:28:48.875177 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.875245 kubelet[2633]: E0213 19:28:48.875209 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.875659 kubelet[2633]: E0213 19:28:48.875448 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.875659 kubelet[2633]: W0213 19:28:48.875458 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.875659 kubelet[2633]: E0213 19:28:48.875585 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.876009 kubelet[2633]: E0213 19:28:48.875995 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.876009 kubelet[2633]: W0213 19:28:48.876006 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.876140 kubelet[2633]: E0213 19:28:48.876113 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.876337 kubelet[2633]: E0213 19:28:48.876326 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.876414 kubelet[2633]: W0213 19:28:48.876403 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.876522 kubelet[2633]: E0213 19:28:48.876511 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.876880 kubelet[2633]: E0213 19:28:48.876780 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.876880 kubelet[2633]: W0213 19:28:48.876791 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.877001 kubelet[2633]: E0213 19:28:48.876986 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.877243 kubelet[2633]: E0213 19:28:48.877233 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.877329 kubelet[2633]: W0213 19:28:48.877318 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.877585 kubelet[2633]: E0213 19:28:48.877481 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.877701 kubelet[2633]: E0213 19:28:48.877690 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.877879 kubelet[2633]: W0213 19:28:48.877778 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.877879 kubelet[2633]: E0213 19:28:48.877791 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.878215 kubelet[2633]: E0213 19:28:48.878112 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.878215 kubelet[2633]: W0213 19:28:48.878133 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.878215 kubelet[2633]: E0213 19:28:48.878143 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.878558 kubelet[2633]: E0213 19:28:48.878494 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.878558 kubelet[2633]: W0213 19:28:48.878503 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.878735 kubelet[2633]: E0213 19:28:48.878642 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.879335 kubelet[2633]: E0213 19:28:48.879323 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.879469 kubelet[2633]: W0213 19:28:48.879396 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.879469 kubelet[2633]: E0213 19:28:48.879414 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.879926 kubelet[2633]: E0213 19:28:48.879816 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.879926 kubelet[2633]: W0213 19:28:48.879827 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.879926 kubelet[2633]: E0213 19:28:48.879841 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.880187 kubelet[2633]: E0213 19:28:48.880177 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.880243 kubelet[2633]: W0213 19:28:48.880233 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.880329 kubelet[2633]: E0213 19:28:48.880286 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.880548 kubelet[2633]: E0213 19:28:48.880513 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.880548 kubelet[2633]: W0213 19:28:48.880522 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.880548 kubelet[2633]: E0213 19:28:48.880530 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.883660 kubelet[2633]: E0213 19:28:48.883644 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.883813 kubelet[2633]: W0213 19:28:48.883743 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.883813 kubelet[2633]: E0213 19:28:48.883774 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.962680 kubelet[2633]: E0213 19:28:48.962566 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:48.963264 containerd[1509]: time="2025-02-13T19:28:48.962985801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d87594d47-mrvrm,Uid:3fee3e96-f188-45c7-9302-3ea17564025c,Namespace:calico-system,Attempt:0,}" Feb 13 19:28:48.974637 kubelet[2633]: E0213 19:28:48.974596 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.974637 kubelet[2633]: W0213 19:28:48.974615 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.974637 kubelet[2633]: E0213 19:28:48.974634 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.974955 kubelet[2633]: E0213 19:28:48.974916 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.974955 kubelet[2633]: W0213 19:28:48.974941 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.975127 kubelet[2633]: E0213 19:28:48.974966 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.975222 kubelet[2633]: E0213 19:28:48.975209 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.975222 kubelet[2633]: W0213 19:28:48.975219 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.975272 kubelet[2633]: E0213 19:28:48.975230 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.975436 kubelet[2633]: E0213 19:28:48.975414 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.975436 kubelet[2633]: W0213 19:28:48.975428 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.975487 kubelet[2633]: E0213 19:28:48.975442 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.975657 kubelet[2633]: E0213 19:28:48.975643 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.975657 kubelet[2633]: W0213 19:28:48.975654 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.975709 kubelet[2633]: E0213 19:28:48.975667 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.975860 kubelet[2633]: E0213 19:28:48.975847 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.975860 kubelet[2633]: W0213 19:28:48.975857 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.975919 kubelet[2633]: E0213 19:28:48.975869 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.976080 kubelet[2633]: E0213 19:28:48.976059 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.976080 kubelet[2633]: W0213 19:28:48.976076 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.976132 kubelet[2633]: E0213 19:28:48.976087 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.976343 kubelet[2633]: E0213 19:28:48.976328 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.976343 kubelet[2633]: W0213 19:28:48.976339 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.976496 kubelet[2633]: E0213 19:28:48.976351 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.976557 kubelet[2633]: E0213 19:28:48.976541 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.976557 kubelet[2633]: W0213 19:28:48.976551 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.976642 kubelet[2633]: E0213 19:28:48.976587 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.976733 kubelet[2633]: E0213 19:28:48.976721 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.976733 kubelet[2633]: W0213 19:28:48.976731 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.976810 kubelet[2633]: E0213 19:28:48.976754 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.976971 kubelet[2633]: E0213 19:28:48.976960 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.976971 kubelet[2633]: W0213 19:28:48.976969 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.977077 kubelet[2633]: E0213 19:28:48.976996 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.977193 kubelet[2633]: E0213 19:28:48.977177 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.977193 kubelet[2633]: W0213 19:28:48.977187 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.977267 kubelet[2633]: E0213 19:28:48.977201 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.977504 kubelet[2633]: E0213 19:28:48.977484 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.977504 kubelet[2633]: W0213 19:28:48.977499 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.977606 kubelet[2633]: E0213 19:28:48.977516 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.977736 kubelet[2633]: E0213 19:28:48.977721 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.977736 kubelet[2633]: W0213 19:28:48.977732 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.977876 kubelet[2633]: E0213 19:28:48.977746 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.978108 kubelet[2633]: E0213 19:28:48.978091 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.978108 kubelet[2633]: W0213 19:28:48.978104 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.978172 kubelet[2633]: E0213 19:28:48.978119 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.978345 kubelet[2633]: E0213 19:28:48.978332 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.978345 kubelet[2633]: W0213 19:28:48.978343 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.978392 kubelet[2633]: E0213 19:28:48.978384 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.978537 kubelet[2633]: E0213 19:28:48.978520 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.978537 kubelet[2633]: W0213 19:28:48.978532 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.978616 kubelet[2633]: E0213 19:28:48.978564 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.978741 kubelet[2633]: E0213 19:28:48.978719 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.978741 kubelet[2633]: W0213 19:28:48.978730 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.978808 kubelet[2633]: E0213 19:28:48.978742 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.978987 kubelet[2633]: E0213 19:28:48.978974 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.979017 kubelet[2633]: W0213 19:28:48.978986 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.979017 kubelet[2633]: E0213 19:28:48.978999 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.979220 kubelet[2633]: E0213 19:28:48.979205 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.979220 kubelet[2633]: W0213 19:28:48.979217 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.979283 kubelet[2633]: E0213 19:28:48.979231 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.979488 kubelet[2633]: E0213 19:28:48.979475 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.979488 kubelet[2633]: W0213 19:28:48.979485 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.979551 kubelet[2633]: E0213 19:28:48.979498 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.979720 kubelet[2633]: E0213 19:28:48.979702 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.979720 kubelet[2633]: W0213 19:28:48.979714 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.979822 kubelet[2633]: E0213 19:28:48.979726 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.979967 kubelet[2633]: E0213 19:28:48.979953 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.979967 kubelet[2633]: W0213 19:28:48.979963 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.980008 kubelet[2633]: E0213 19:28:48.979985 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.980153 kubelet[2633]: E0213 19:28:48.980135 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.980153 kubelet[2633]: W0213 19:28:48.980149 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.980216 kubelet[2633]: E0213 19:28:48.980160 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.980473 kubelet[2633]: E0213 19:28:48.980460 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.980473 kubelet[2633]: W0213 19:28:48.980471 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.980534 kubelet[2633]: E0213 19:28:48.980479 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:48.983020 kubelet[2633]: E0213 19:28:48.983008 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:48.983093 kubelet[2633]: W0213 19:28:48.983074 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:48.983093 kubelet[2633]: E0213 19:28:48.983088 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:48.997158 kubelet[2633]: E0213 19:28:48.997135 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:48.997574 containerd[1509]: time="2025-02-13T19:28:48.997536528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8v5l,Uid:9fe260ae-d144-4e4a-80b0-dab45f8cece3,Namespace:calico-system,Attempt:0,}" Feb 13 19:28:49.067118 containerd[1509]: time="2025-02-13T19:28:49.066949819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:49.067118 containerd[1509]: time="2025-02-13T19:28:49.067015904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:49.067118 containerd[1509]: time="2025-02-13T19:28:49.067031353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:49.067382 containerd[1509]: time="2025-02-13T19:28:49.066691760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:28:49.067382 containerd[1509]: time="2025-02-13T19:28:49.066903231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:28:49.067382 containerd[1509]: time="2025-02-13T19:28:49.066934289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:49.068010 containerd[1509]: time="2025-02-13T19:28:49.067707994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:49.069192 containerd[1509]: time="2025-02-13T19:28:49.068015124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:28:49.088918 systemd[1]: Started cri-containerd-ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a.scope - libcontainer container ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a. Feb 13 19:28:49.091983 systemd[1]: Started cri-containerd-7f0c1827305abc4d801ef48f424e6fa9cc495b4cb5ef0d5d95f27c2b1cdcb48e.scope - libcontainer container 7f0c1827305abc4d801ef48f424e6fa9cc495b4cb5ef0d5d95f27c2b1cdcb48e. 
Feb 13 19:28:49.113901 containerd[1509]: time="2025-02-13T19:28:49.113805882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8v5l,Uid:9fe260ae-d144-4e4a-80b0-dab45f8cece3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a\"" Feb 13 19:28:49.114675 kubelet[2633]: E0213 19:28:49.114650 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:49.116009 containerd[1509]: time="2025-02-13T19:28:49.115988372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:28:49.136333 containerd[1509]: time="2025-02-13T19:28:49.136297401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d87594d47-mrvrm,Uid:3fee3e96-f188-45c7-9302-3ea17564025c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f0c1827305abc4d801ef48f424e6fa9cc495b4cb5ef0d5d95f27c2b1cdcb48e\"" Feb 13 19:28:49.137043 kubelet[2633]: E0213 19:28:49.137020 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:49.949269 kubelet[2633]: E0213 19:28:49.949227 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:49.970511 kubelet[2633]: E0213 19:28:49.970476 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.970511 kubelet[2633]: W0213 19:28:49.970501 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.970681 kubelet[2633]: E0213 19:28:49.970531 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.970818 kubelet[2633]: E0213 19:28:49.970805 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.970853 kubelet[2633]: W0213 19:28:49.970817 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.970853 kubelet[2633]: E0213 19:28:49.970828 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.971050 kubelet[2633]: E0213 19:28:49.971019 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.971050 kubelet[2633]: W0213 19:28:49.971031 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.971050 kubelet[2633]: E0213 19:28:49.971043 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:49.971330 kubelet[2633]: E0213 19:28:49.971318 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.971361 kubelet[2633]: W0213 19:28:49.971330 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.971361 kubelet[2633]: E0213 19:28:49.971342 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.971543 kubelet[2633]: E0213 19:28:49.971531 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.971543 kubelet[2633]: W0213 19:28:49.971541 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.971710 kubelet[2633]: E0213 19:28:49.971550 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.971785 kubelet[2633]: E0213 19:28:49.971756 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.971814 kubelet[2633]: W0213 19:28:49.971785 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.971814 kubelet[2633]: E0213 19:28:49.971795 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.972009 kubelet[2633]: E0213 19:28:49.971997 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.972009 kubelet[2633]: W0213 19:28:49.972008 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.972070 kubelet[2633]: E0213 19:28:49.972016 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.972226 kubelet[2633]: E0213 19:28:49.972213 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.972226 kubelet[2633]: W0213 19:28:49.972224 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.972276 kubelet[2633]: E0213 19:28:49.972233 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:49.972442 kubelet[2633]: E0213 19:28:49.972431 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.972469 kubelet[2633]: W0213 19:28:49.972441 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.972469 kubelet[2633]: E0213 19:28:49.972450 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.972660 kubelet[2633]: E0213 19:28:49.972649 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.972687 kubelet[2633]: W0213 19:28:49.972659 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.972687 kubelet[2633]: E0213 19:28:49.972668 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.972888 kubelet[2633]: E0213 19:28:49.972877 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.972918 kubelet[2633]: W0213 19:28:49.972888 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.972918 kubelet[2633]: E0213 19:28:49.972897 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.973102 kubelet[2633]: E0213 19:28:49.973089 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.973102 kubelet[2633]: W0213 19:28:49.973100 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.973153 kubelet[2633]: E0213 19:28:49.973109 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.973332 kubelet[2633]: E0213 19:28:49.973321 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.973353 kubelet[2633]: W0213 19:28:49.973331 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.973383 kubelet[2633]: E0213 19:28:49.973342 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:49.973564 kubelet[2633]: E0213 19:28:49.973553 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.973588 kubelet[2633]: W0213 19:28:49.973563 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.973588 kubelet[2633]: E0213 19:28:49.973572 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.973795 kubelet[2633]: E0213 19:28:49.973784 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.973823 kubelet[2633]: W0213 19:28:49.973794 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.973823 kubelet[2633]: E0213 19:28:49.973805 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.974019 kubelet[2633]: E0213 19:28:49.974008 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.974047 kubelet[2633]: W0213 19:28:49.974019 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.974047 kubelet[2633]: E0213 19:28:49.974027 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.974248 kubelet[2633]: E0213 19:28:49.974236 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.974272 kubelet[2633]: W0213 19:28:49.974247 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.974272 kubelet[2633]: E0213 19:28:49.974256 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.974458 kubelet[2633]: E0213 19:28:49.974446 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.974479 kubelet[2633]: W0213 19:28:49.974456 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.974479 kubelet[2633]: E0213 19:28:49.974465 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:49.974662 kubelet[2633]: E0213 19:28:49.974651 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.974686 kubelet[2633]: W0213 19:28:49.974662 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.974686 kubelet[2633]: E0213 19:28:49.974671 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.974885 kubelet[2633]: E0213 19:28:49.974874 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.974913 kubelet[2633]: W0213 19:28:49.974884 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.974913 kubelet[2633]: E0213 19:28:49.974893 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.975142 kubelet[2633]: E0213 19:28:49.975125 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.975142 kubelet[2633]: W0213 19:28:49.975135 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.975195 kubelet[2633]: E0213 19:28:49.975143 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.975350 kubelet[2633]: E0213 19:28:49.975339 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.975371 kubelet[2633]: W0213 19:28:49.975349 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.975371 kubelet[2633]: E0213 19:28:49.975356 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.975557 kubelet[2633]: E0213 19:28:49.975546 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.975582 kubelet[2633]: W0213 19:28:49.975557 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.975582 kubelet[2633]: E0213 19:28:49.975566 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:28:49.975792 kubelet[2633]: E0213 19:28:49.975780 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.975820 kubelet[2633]: W0213 19:28:49.975791 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.975820 kubelet[2633]: E0213 19:28:49.975801 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:49.976033 kubelet[2633]: E0213 19:28:49.976022 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:28:49.976072 kubelet[2633]: W0213 19:28:49.976033 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:28:49.976072 kubelet[2633]: E0213 19:28:49.976042 2633 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:28:50.603988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038144905.mount: Deactivated successfully. Feb 13 19:28:50.674376 containerd[1509]: time="2025-02-13T19:28:50.674340890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:50.675222 containerd[1509]: time="2025-02-13T19:28:50.675186119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:28:50.676247 containerd[1509]: time="2025-02-13T19:28:50.676217589Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:50.678081 containerd[1509]: time="2025-02-13T19:28:50.678040657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:50.678636 containerd[1509]: time="2025-02-13T19:28:50.678611026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.562596706s" Feb 13 19:28:50.678661 containerd[1509]: time="2025-02-13T19:28:50.678634300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:28:50.679561 containerd[1509]: time="2025-02-13T19:28:50.679462727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:28:50.680332 containerd[1509]: time="2025-02-13T19:28:50.680301563Z" level=info msg="CreateContainer within sandbox 
\"ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:28:50.694119 containerd[1509]: time="2025-02-13T19:28:50.694086182Z" level=info msg="CreateContainer within sandbox \"ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a\"" Feb 13 19:28:50.694436 containerd[1509]: time="2025-02-13T19:28:50.694401568Z" level=info msg="StartContainer for \"ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a\"" Feb 13 19:28:50.722905 systemd[1]: Started cri-containerd-ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a.scope - libcontainer container ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a. Feb 13 19:28:50.765707 systemd[1]: cri-containerd-ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a.scope: Deactivated successfully. Feb 13 19:28:50.782386 kubelet[2633]: E0213 19:28:50.782351 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:28:50.829193 containerd[1509]: time="2025-02-13T19:28:50.829126274Z" level=info msg="StartContainer for \"ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a\" returns successfully" Feb 13 19:28:50.850668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a-rootfs.mount: Deactivated successfully. 
Feb 13 19:28:50.852147 kubelet[2633]: E0213 19:28:50.852120 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:50.866692 containerd[1509]: time="2025-02-13T19:28:50.866536944Z" level=info msg="shim disconnected" id=ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a namespace=k8s.io Feb 13 19:28:50.866840 containerd[1509]: time="2025-02-13T19:28:50.866607848Z" level=warning msg="cleaning up after shim disconnected" id=ded7a030379fd8fb26e3441e2a5b179eef1624bd351e511ee25c28a9572d2c0a namespace=k8s.io Feb 13 19:28:50.866840 containerd[1509]: time="2025-02-13T19:28:50.866823585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:28:51.853441 kubelet[2633]: E0213 19:28:51.853404 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:52.584889 containerd[1509]: time="2025-02-13T19:28:52.584828469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:52.585558 containerd[1509]: time="2025-02-13T19:28:52.585495909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 19:28:52.586384 containerd[1509]: time="2025-02-13T19:28:52.586354843Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:52.588301 containerd[1509]: time="2025-02-13T19:28:52.588273888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:52.588851 containerd[1509]: time="2025-02-13T19:28:52.588809590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.909319101s" Feb 13 19:28:52.588851 containerd[1509]: time="2025-02-13T19:28:52.588848544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:28:52.592892 containerd[1509]: time="2025-02-13T19:28:52.592858329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:28:52.602045 containerd[1509]: time="2025-02-13T19:28:52.601991854Z" level=info msg="CreateContainer within sandbox \"7f0c1827305abc4d801ef48f424e6fa9cc495b4cb5ef0d5d95f27c2b1cdcb48e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:28:52.616635 containerd[1509]: time="2025-02-13T19:28:52.616584843Z" level=info msg="CreateContainer within sandbox \"7f0c1827305abc4d801ef48f424e6fa9cc495b4cb5ef0d5d95f27c2b1cdcb48e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e92c0eefb40af467fc94512a933c2c6dd64f52183dadcfc3865b861c3f74f489\"" Feb 13 19:28:52.617328 containerd[1509]: time="2025-02-13T19:28:52.617081602Z" level=info msg="StartContainer for 
\"e92c0eefb40af467fc94512a933c2c6dd64f52183dadcfc3865b861c3f74f489\"" Feb 13 19:28:52.645898 systemd[1]: Started cri-containerd-e92c0eefb40af467fc94512a933c2c6dd64f52183dadcfc3865b861c3f74f489.scope - libcontainer container e92c0eefb40af467fc94512a933c2c6dd64f52183dadcfc3865b861c3f74f489. Feb 13 19:28:52.685590 containerd[1509]: time="2025-02-13T19:28:52.685547538Z" level=info msg="StartContainer for \"e92c0eefb40af467fc94512a933c2c6dd64f52183dadcfc3865b861c3f74f489\" returns successfully" Feb 13 19:28:52.782695 kubelet[2633]: E0213 19:28:52.782638 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:28:52.857028 kubelet[2633]: E0213 19:28:52.856841 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:52.866075 kubelet[2633]: I0213 19:28:52.866007 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d87594d47-mrvrm" podStartSLOduration=1.410130651 podStartE2EDuration="4.865219104s" podCreationTimestamp="2025-02-13 19:28:48 +0000 UTC" firstStartedPulling="2025-02-13 19:28:49.137484477 +0000 UTC m=+15.424504410" lastFinishedPulling="2025-02-13 19:28:52.59257293 +0000 UTC m=+18.879592863" observedRunningTime="2025-02-13 19:28:52.864864785 +0000 UTC m=+19.151884718" watchObservedRunningTime="2025-02-13 19:28:52.865219104 +0000 UTC m=+19.152239037" Feb 13 19:28:53.859849 kubelet[2633]: I0213 19:28:53.859750 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:28:53.860352 kubelet[2633]: E0213 19:28:53.860133 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:54.783344 kubelet[2633]: E0213 19:28:54.783294 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:28:56.783000 kubelet[2633]: E0213 19:28:56.782926 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:28:58.055476 containerd[1509]: time="2025-02-13T19:28:58.055434380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:58.056546 containerd[1509]: time="2025-02-13T19:28:58.056492083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:28:58.057706 containerd[1509]: time="2025-02-13T19:28:58.057657059Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
19:28:58.059999 containerd[1509]: time="2025-02-13T19:28:58.059970589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:28:58.060792 containerd[1509]: time="2025-02-13T19:28:58.060771469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.467858576s" Feb 13 19:28:58.060848 containerd[1509]: time="2025-02-13T19:28:58.060795995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:28:58.064161 containerd[1509]: time="2025-02-13T19:28:58.064103128Z" level=info msg="CreateContainer within sandbox \"ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:28:58.080904 containerd[1509]: time="2025-02-13T19:28:58.080840661Z" level=info msg="CreateContainer within sandbox \"ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b\"" Feb 13 19:28:58.081702 containerd[1509]: time="2025-02-13T19:28:58.081663903Z" level=info msg="StartContainer for \"3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b\"" Feb 13 19:28:58.116988 systemd[1]: Started cri-containerd-3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b.scope - libcontainer container 3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b. Feb 13 19:28:58.266165 containerd[1509]: time="2025-02-13T19:28:58.266104057Z" level=info msg="StartContainer for \"3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b\" returns successfully" Feb 13 19:28:58.783494 kubelet[2633]: E0213 19:28:58.783434 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:28:58.869192 kubelet[2633]: E0213 19:28:58.869146 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:59.259563 containerd[1509]: time="2025-02-13T19:28:59.259427222Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:28:59.262805 systemd[1]: cri-containerd-3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b.scope: Deactivated successfully. Feb 13 19:28:59.263178 systemd[1]: cri-containerd-3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b.scope: Consumed 528ms CPU time, 159.2M memory peak, 4K read from disk, 151M written to disk. 
Feb 13 19:28:59.285067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b-rootfs.mount: Deactivated successfully. Feb 13 19:28:59.289298 containerd[1509]: time="2025-02-13T19:28:59.289226154Z" level=info msg="shim disconnected" id=3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b namespace=k8s.io Feb 13 19:28:59.289298 containerd[1509]: time="2025-02-13T19:28:59.289292819Z" level=warning msg="cleaning up after shim disconnected" id=3748514a59268e8e677f3c268e1c2ddab9c78e68e054fed53636697c6977522b namespace=k8s.io Feb 13 19:28:59.289298 containerd[1509]: time="2025-02-13T19:28:59.289303880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:28:59.362701 kubelet[2633]: I0213 19:28:59.362664 2633 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:28:59.391058 systemd[1]: Created slice kubepods-burstable-pod95422681_2fb6_4df9_b5da_8fadfe907d26.slice - libcontainer container kubepods-burstable-pod95422681_2fb6_4df9_b5da_8fadfe907d26.slice. Feb 13 19:28:59.402865 systemd[1]: Created slice kubepods-burstable-podbdf24e1d_a6d5_42ea_b368_ab29d0b4f983.slice - libcontainer container kubepods-burstable-podbdf24e1d_a6d5_42ea_b368_ab29d0b4f983.slice. Feb 13 19:28:59.410929 systemd[1]: Created slice kubepods-besteffort-pod5a1448e5_fc1e_42d8_9fd7_25807931cfd4.slice - libcontainer container kubepods-besteffort-pod5a1448e5_fc1e_42d8_9fd7_25807931cfd4.slice. Feb 13 19:28:59.416208 systemd[1]: Created slice kubepods-besteffort-podc94995b2_7cbe_4295_8d76_ae0d1a49f166.slice - libcontainer container kubepods-besteffort-podc94995b2_7cbe_4295_8d76_ae0d1a49f166.slice. Feb 13 19:28:59.421043 systemd[1]: Created slice kubepods-besteffort-pod57e05ce1_cab8_4450_a70c_4775184ae13e.slice - libcontainer container kubepods-besteffort-pod57e05ce1_cab8_4450_a70c_4775184ae13e.slice. 
Feb 13 19:28:59.448137 kubelet[2633]: I0213 19:28:59.448090 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a1448e5-fc1e-42d8-9fd7-25807931cfd4-tigera-ca-bundle\") pod \"calico-kube-controllers-5b67b658d9-8gkf8\" (UID: \"5a1448e5-fc1e-42d8-9fd7-25807931cfd4\") " pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:28:59.448137 kubelet[2633]: I0213 19:28:59.448141 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57e05ce1-cab8-4450-a70c-4775184ae13e-calico-apiserver-certs\") pod \"calico-apiserver-6c445f8fb-sdsg2\" (UID: \"57e05ce1-cab8-4450-a70c-4775184ae13e\") " pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:28:59.448299 kubelet[2633]: I0213 19:28:59.448159 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95422681-2fb6-4df9-b5da-8fadfe907d26-config-volume\") pod \"coredns-668d6bf9bc-t8v5n\" (UID: \"95422681-2fb6-4df9-b5da-8fadfe907d26\") " pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:28:59.448299 kubelet[2633]: I0213 19:28:59.448177 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6cqw\" (UniqueName: \"kubernetes.io/projected/57e05ce1-cab8-4450-a70c-4775184ae13e-kube-api-access-z6cqw\") pod \"calico-apiserver-6c445f8fb-sdsg2\" (UID: \"57e05ce1-cab8-4450-a70c-4775184ae13e\") " pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:28:59.448299 kubelet[2633]: I0213 19:28:59.448192 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfdp\" (UniqueName: \"kubernetes.io/projected/bdf24e1d-a6d5-42ea-b368-ab29d0b4f983-kube-api-access-6qfdp\") pod \"coredns-668d6bf9bc-lc7rd\" (UID: \"bdf24e1d-a6d5-42ea-b368-ab29d0b4f983\") " pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:28:59.448299 kubelet[2633]: I0213 19:28:59.448206 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c94995b2-7cbe-4295-8d76-ae0d1a49f166-calico-apiserver-certs\") pod \"calico-apiserver-6c445f8fb-l74z9\" (UID: \"c94995b2-7cbe-4295-8d76-ae0d1a49f166\") " pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:28:59.448299 kubelet[2633]: I0213 19:28:59.448265 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsvh5\" (UniqueName: \"kubernetes.io/projected/95422681-2fb6-4df9-b5da-8fadfe907d26-kube-api-access-hsvh5\") pod \"coredns-668d6bf9bc-t8v5n\" (UID: \"95422681-2fb6-4df9-b5da-8fadfe907d26\") " pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:28:59.448419 kubelet[2633]: I0213 19:28:59.448303 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkk7c\" (UniqueName: \"kubernetes.io/projected/5a1448e5-fc1e-42d8-9fd7-25807931cfd4-kube-api-access-lkk7c\") pod \"calico-kube-controllers-5b67b658d9-8gkf8\" (UID: \"5a1448e5-fc1e-42d8-9fd7-25807931cfd4\") " pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:28:59.448419 kubelet[2633]: I0213 19:28:59.448318 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-tf8zc\" (UniqueName: \"kubernetes.io/projected/c94995b2-7cbe-4295-8d76-ae0d1a49f166-kube-api-access-tf8zc\") pod \"calico-apiserver-6c445f8fb-l74z9\" (UID: \"c94995b2-7cbe-4295-8d76-ae0d1a49f166\") " pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:28:59.448419 kubelet[2633]: I0213 19:28:59.448334 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdf24e1d-a6d5-42ea-b368-ab29d0b4f983-config-volume\") pod \"coredns-668d6bf9bc-lc7rd\" (UID: \"bdf24e1d-a6d5-42ea-b368-ab29d0b4f983\") " pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:28:59.695535 kubelet[2633]: E0213 19:28:59.695485 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:59.696248 containerd[1509]: time="2025-02-13T19:28:59.696145616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:0,}" Feb 13 19:28:59.708532 kubelet[2633]: E0213 19:28:59.708086 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:59.708651 containerd[1509]: time="2025-02-13T19:28:59.708607856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:0,}" Feb 13 19:28:59.715030 containerd[1509]: time="2025-02-13T19:28:59.714984871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:0,}" Feb 13 19:28:59.719200 containerd[1509]: time="2025-02-13T19:28:59.719172801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:28:59.724972 containerd[1509]: time="2025-02-13T19:28:59.724938934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:28:59.796109 containerd[1509]: time="2025-02-13T19:28:59.796032988Z" level=error msg="Failed to destroy network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.796584 containerd[1509]: time="2025-02-13T19:28:59.796466635Z" level=error msg="encountered an error cleaning up failed sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.796584 containerd[1509]: time="2025-02-13T19:28:59.796528172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.796776 kubelet[2633]: E0213 19:28:59.796720 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.797098 kubelet[2633]: E0213 19:28:59.796869 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:28:59.797098 kubelet[2633]: E0213 19:28:59.796904 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:28:59.797098 kubelet[2633]: E0213 19:28:59.796954 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t8v5n" podUID="95422681-2fb6-4df9-b5da-8fadfe907d26" Feb 13 19:28:59.810829 containerd[1509]: time="2025-02-13T19:28:59.810784361Z" level=error msg="Failed to destroy network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.811300 containerd[1509]: time="2025-02-13T19:28:59.811207548Z" level=error msg="encountered an error cleaning up failed sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.811300 containerd[1509]: time="2025-02-13T19:28:59.811286708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.811597 kubelet[2633]: E0213 19:28:59.811527 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.811660 kubelet[2633]: E0213 19:28:59.811624 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:28:59.811690 kubelet[2633]: E0213 19:28:59.811658 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:28:59.811739 kubelet[2633]: E0213 19:28:59.811703 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lc7rd" podUID="bdf24e1d-a6d5-42ea-b368-ab29d0b4f983" Feb 13 19:28:59.821331 containerd[1509]: time="2025-02-13T19:28:59.821268323Z" level=error msg="Failed to destroy network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.822139 containerd[1509]: time="2025-02-13T19:28:59.821952131Z" level=error msg="encountered an error cleaning up failed sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.822139 containerd[1509]: time="2025-02-13T19:28:59.822027253Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.822367 kubelet[2633]: E0213 19:28:59.822316 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.822427 kubelet[2633]: E0213 19:28:59.822394 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:28:59.822427 kubelet[2633]: E0213 19:28:59.822414 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:28:59.823037 kubelet[2633]: E0213 19:28:59.822473 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" podUID="c94995b2-7cbe-4295-8d76-ae0d1a49f166" Feb 13 19:28:59.831162 containerd[1509]: time="2025-02-13T19:28:59.831101399Z" level=error msg="Failed to destroy network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.831892 containerd[1509]: time="2025-02-13T19:28:59.831864216Z" level=error msg="encountered an error cleaning up failed sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 19:28:59.832079 containerd[1509]: time="2025-02-13T19:28:59.832035418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.832421 kubelet[2633]: E0213 19:28:59.832389 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.832489 kubelet[2633]: E0213 19:28:59.832436 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:28:59.832489 kubelet[2633]: E0213 19:28:59.832457 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:28:59.832559 kubelet[2633]: E0213 19:28:59.832499 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" podUID="5a1448e5-fc1e-42d8-9fd7-25807931cfd4" Feb 13 19:28:59.836236 containerd[1509]: time="2025-02-13T19:28:59.836199053Z" level=error msg="Failed to destroy network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.836617 containerd[1509]: time="2025-02-13T19:28:59.836587906Z" level=error msg="encountered an error cleaning up failed sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.836658 containerd[1509]: time="2025-02-13T19:28:59.836644502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.836996 kubelet[2633]: E0213 19:28:59.836950 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.837045 kubelet[2633]: E0213 19:28:59.837019 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:28:59.837071 kubelet[2633]: E0213 19:28:59.837041 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:28:59.837132 kubelet[2633]: E0213 19:28:59.837105 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" podUID="57e05ce1-cab8-4450-a70c-4775184ae13e" Feb 13 19:28:59.872274 kubelet[2633]: E0213 19:28:59.872242 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:59.872948 containerd[1509]: time="2025-02-13T19:28:59.872891634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:28:59.873258 kubelet[2633]: I0213 19:28:59.873219 2633 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f" Feb 13 19:28:59.873783 containerd[1509]: time="2025-02-13T19:28:59.873734892Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:28:59.874017 containerd[1509]: time="2025-02-13T19:28:59.873957642Z" level=info msg="Ensure that sandbox a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f in task-service has been cleanup successfully" Feb 13 19:28:59.874245 containerd[1509]: time="2025-02-13T19:28:59.874193586Z" level=info msg="TearDown network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" successfully" Feb 13 19:28:59.874296 containerd[1509]: time="2025-02-13T19:28:59.874244112Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" returns successfully" Feb 13 19:28:59.874743 containerd[1509]: time="2025-02-13T19:28:59.874705822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:28:59.875848 kubelet[2633]: I0213 19:28:59.875694 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0" Feb 13 19:28:59.876142 containerd[1509]: time="2025-02-13T19:28:59.876038522Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:28:59.876255 containerd[1509]: time="2025-02-13T19:28:59.876231506Z" level=info msg="Ensure that sandbox 10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0 in task-service has been cleanup successfully" Feb 13 19:28:59.876747 containerd[1509]: time="2025-02-13T19:28:59.876407688Z" level=info msg="TearDown network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" successfully" Feb 13 19:28:59.876747 containerd[1509]: time="2025-02-13T19:28:59.876423498Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" returns successfully" Feb 13 19:28:59.876860 containerd[1509]: time="2025-02-13T19:28:59.876752237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:1,}" Feb 13 19:28:59.877572 kubelet[2633]: I0213 19:28:59.877551 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754" Feb 13 19:28:59.878100 containerd[1509]: time="2025-02-13T19:28:59.877882707Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:28:59.878100 containerd[1509]: time="2025-02-13T19:28:59.878035665Z" level=info msg="Ensure that sandbox d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754 in task-service has been cleanup successfully" Feb 13 19:28:59.878299 containerd[1509]: time="2025-02-13T19:28:59.878188464Z" level=info msg="TearDown network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" successfully" Feb 13 19:28:59.878299 containerd[1509]: time="2025-02-13T19:28:59.878202780Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" returns successfully" Feb 13 19:28:59.878728 containerd[1509]: 
time="2025-02-13T19:28:59.878699526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:28:59.879306 kubelet[2633]: I0213 19:28:59.879285 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80" Feb 13 19:28:59.880023 containerd[1509]: time="2025-02-13T19:28:59.879715269Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:28:59.880023 containerd[1509]: time="2025-02-13T19:28:59.879918142Z" level=info msg="Ensure that sandbox 3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80 in task-service has been cleanup successfully" Feb 13 19:28:59.880495 kubelet[2633]: I0213 19:28:59.880461 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289" Feb 13 19:28:59.880965 containerd[1509]: time="2025-02-13T19:28:59.880942371Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:28:59.881136 containerd[1509]: time="2025-02-13T19:28:59.881085541Z" level=info msg="Ensure that sandbox 867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289 in task-service has been cleanup successfully" Feb 13 19:28:59.882106 containerd[1509]: time="2025-02-13T19:28:59.881913441Z" level=info msg="TearDown network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" successfully" Feb 13 19:28:59.882106 containerd[1509]: time="2025-02-13T19:28:59.881939069Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" returns successfully" Feb 13 19:28:59.882106 containerd[1509]: time="2025-02-13T19:28:59.881919943Z" level=info msg="TearDown network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" successfully" Feb 13 19:28:59.882106 containerd[1509]: time="2025-02-13T19:28:59.882014591Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" returns successfully" Feb 13 19:28:59.882253 kubelet[2633]: E0213 19:28:59.882195 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:59.882280 kubelet[2633]: E0213 19:28:59.882197 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:28:59.882571 containerd[1509]: time="2025-02-13T19:28:59.882510937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:1,}" Feb 13 19:28:59.882655 containerd[1509]: time="2025-02-13T19:28:59.882513572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:1,}" Feb 13 19:28:59.983775 containerd[1509]: time="2025-02-13T19:28:59.983645268Z" level=error msg="Failed to destroy network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.985235 containerd[1509]: time="2025-02-13T19:28:59.983900078Z" level=error msg="Failed to destroy network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.985313 containerd[1509]: time="2025-02-13T19:28:59.985274367Z" level=error msg="encountered an error cleaning up failed sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.985337 containerd[1509]: time="2025-02-13T19:28:59.985325744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.985649 kubelet[2633]: E0213 19:28:59.985612 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.985908 kubelet[2633]: E0213 19:28:59.985818 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:28:59.985908 kubelet[2633]: E0213 19:28:59.985868 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:28:59.986179 kubelet[2633]: E0213 19:28:59.986061 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" podUID="c94995b2-7cbe-4295-8d76-ae0d1a49f166" Feb 13 19:28:59.988066 containerd[1509]: time="2025-02-13T19:28:59.987303981Z" level=error msg="encountered an error cleaning up failed sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.988066 containerd[1509]: time="2025-02-13T19:28:59.987349326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.988174 kubelet[2633]: E0213 19:28:59.987499 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:28:59.988174 kubelet[2633]: E0213 19:28:59.987562 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:28:59.988174 kubelet[2633]: E0213 19:28:59.987584 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:28:59.988263 kubelet[2633]: E0213 19:28:59.987625 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" podUID="5a1448e5-fc1e-42d8-9fd7-25807931cfd4" Feb 13 19:29:00.009580 
containerd[1509]: time="2025-02-13T19:29:00.009496429Z" level=error msg="Failed to destroy network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.009966 containerd[1509]: time="2025-02-13T19:29:00.009934243Z" level=error msg="encountered an error cleaning up failed sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.010254 containerd[1509]: time="2025-02-13T19:29:00.010138808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.010438 kubelet[2633]: E0213 19:29:00.010396 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.010482 kubelet[2633]: E0213 19:29:00.010457 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:00.010506 kubelet[2633]: E0213 19:29:00.010476 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:00.010561 kubelet[2633]: E0213 19:29:00.010534 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" podUID="57e05ce1-cab8-4450-a70c-4775184ae13e" Feb 13 19:29:00.012080 containerd[1509]: time="2025-02-13T19:29:00.012018499Z" level=error msg="Failed to destroy network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.012488 containerd[1509]: time="2025-02-13T19:29:00.012456874Z" level=error msg="encountered an error cleaning up failed sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.012685 containerd[1509]: time="2025-02-13T19:29:00.012658033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.012867 kubelet[2633]: E0213 19:29:00.012844 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.012912 kubelet[2633]: E0213 19:29:00.012874 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:00.012912 kubelet[2633]: E0213 19:29:00.012890 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:00.012989 kubelet[2633]: E0213 19:29:00.012928 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t8v5n" podUID="95422681-2fb6-4df9-b5da-8fadfe907d26" Feb 13 19:29:00.017101 containerd[1509]: time="2025-02-13T19:29:00.017063250Z" level=error msg="Failed to destroy network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.017449 containerd[1509]: time="2025-02-13T19:29:00.017416516Z" level=error msg="encountered an error cleaning up failed sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.017512 containerd[1509]: time="2025-02-13T19:29:00.017474955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.017715 kubelet[2633]: E0213 19:29:00.017674 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.017786 kubelet[2633]: E0213 19:29:00.017728 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:00.017786 kubelet[2633]: E0213 19:29:00.017771 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:00.017850 kubelet[2633]: E0213 19:29:00.017811 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lc7rd" podUID="bdf24e1d-a6d5-42ea-b368-ab29d0b4f983" Feb 13 19:29:00.286491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289-shm.mount: Deactivated successfully. Feb 13 19:29:00.789509 systemd[1]: Created slice kubepods-besteffort-pod5eaecbe6_c19b_4299_995d_b27991011c1a.slice - libcontainer container kubepods-besteffort-pod5eaecbe6_c19b_4299_995d_b27991011c1a.slice. Feb 13 19:29:00.791928 containerd[1509]: time="2025-02-13T19:29:00.791873015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:0,}" Feb 13 19:29:00.859687 containerd[1509]: time="2025-02-13T19:29:00.859624994Z" level=error msg="Failed to destroy network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.860184 containerd[1509]: time="2025-02-13T19:29:00.860138081Z" level=error msg="encountered an error cleaning up failed sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.860233 containerd[1509]: time="2025-02-13T19:29:00.860209956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.860573 kubelet[2633]: E0213 19:29:00.860508 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:00.861102 kubelet[2633]: E0213 19:29:00.860592 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:00.861102 kubelet[2633]: E0213 19:29:00.860621 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:00.861102 kubelet[2633]: E0213 19:29:00.860677 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:29:00.862910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe-shm.mount: Deactivated successfully. Feb 13 19:29:00.884179 kubelet[2633]: I0213 19:29:00.884146 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c" Feb 13 19:29:00.884753 containerd[1509]: time="2025-02-13T19:29:00.884701926Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" Feb 13 19:29:00.884964 containerd[1509]: time="2025-02-13T19:29:00.884936428Z" level=info msg="Ensure that sandbox aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c in task-service has been cleanup successfully" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.885162684Z" level=info msg="TearDown network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" successfully" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.885177892Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" returns successfully" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.886220817Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.886409192Z" level=info msg="Ensure that sandbox 720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba in task-service has been cleanup successfully" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.886601835Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.886671436Z" level=info msg="TearDown network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" successfully" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.886681464Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" returns successfully" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.887176306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:2,}" Feb 13 19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.887183630Z" level=info msg="TearDown network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" successfully" Feb 13 
19:29:00.887288 containerd[1509]: time="2025-02-13T19:29:00.887218667Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" returns successfully" Feb 13 19:29:00.887643 kubelet[2633]: I0213 19:29:00.885807 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba" Feb 13 19:29:00.887643 kubelet[2633]: E0213 19:29:00.886938 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:00.887720 containerd[1509]: time="2025-02-13T19:29:00.887646812Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:29:00.887748 containerd[1509]: time="2025-02-13T19:29:00.887726463Z" level=info msg="TearDown network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" successfully" Feb 13 19:29:00.887748 containerd[1509]: time="2025-02-13T19:29:00.887738385Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" returns successfully" Feb 13 19:29:00.888213 containerd[1509]: time="2025-02-13T19:29:00.888152225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:29:00.888532 kubelet[2633]: I0213 19:29:00.888492 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a" Feb 13 19:29:00.889187 systemd[1]: run-netns-cni\x2d94da6426\x2d984a\x2d4570\x2d82a3\x2dfc2cf739f7a0.mount: Deactivated successfully. 
Feb 13 19:29:00.889332 containerd[1509]: time="2025-02-13T19:29:00.889185200Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" Feb 13 19:29:00.889332 containerd[1509]: time="2025-02-13T19:29:00.889319484Z" level=info msg="Ensure that sandbox 5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a in task-service has been cleanup successfully" Feb 13 19:29:00.889638 containerd[1509]: time="2025-02-13T19:29:00.889512136Z" level=info msg="TearDown network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" successfully" Feb 13 19:29:00.889638 containerd[1509]: time="2025-02-13T19:29:00.889562962Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" returns successfully" Feb 13 19:29:00.890028 containerd[1509]: time="2025-02-13T19:29:00.889976821Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:29:00.890202 containerd[1509]: time="2025-02-13T19:29:00.890105824Z" level=info msg="TearDown network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" successfully" Feb 13 19:29:00.890202 containerd[1509]: time="2025-02-13T19:29:00.890127765Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" returns successfully" Feb 13 19:29:00.890314 kubelet[2633]: E0213 19:29:00.890293 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:00.890650 containerd[1509]: time="2025-02-13T19:29:00.890616587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:2,}" Feb 13 19:29:00.891479 containerd[1509]: time="2025-02-13T19:29:00.891452752Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" Feb 13 19:29:00.891689 containerd[1509]: time="2025-02-13T19:29:00.891667246Z" level=info msg="Ensure that sandbox 7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe in task-service has been cleanup successfully" Feb 13 19:29:00.891839 kubelet[2633]: I0213 19:29:00.891783 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe" Feb 13 19:29:00.891887 containerd[1509]: time="2025-02-13T19:29:00.891864036Z" level=info msg="TearDown network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" successfully" Feb 13 19:29:00.891918 containerd[1509]: time="2025-02-13T19:29:00.891888732Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" returns successfully" Feb 13 19:29:00.892608 containerd[1509]: time="2025-02-13T19:29:00.892399815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:1,}" Feb 13 19:29:00.892947 kubelet[2633]: I0213 19:29:00.892915 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08" Feb 13 19:29:00.893683 containerd[1509]: time="2025-02-13T19:29:00.893310621Z" level=info msg="StopPodSandbox for 
\"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" Feb 13 19:29:00.893683 containerd[1509]: time="2025-02-13T19:29:00.893512501Z" level=info msg="Ensure that sandbox 851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08 in task-service has been cleanup successfully" Feb 13 19:29:00.893851 containerd[1509]: time="2025-02-13T19:29:00.893830911Z" level=info msg="TearDown network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" successfully" Feb 13 19:29:00.893937 containerd[1509]: time="2025-02-13T19:29:00.893919167Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" returns successfully" Feb 13 19:29:00.894372 containerd[1509]: time="2025-02-13T19:29:00.894346922Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:29:00.895376 containerd[1509]: time="2025-02-13T19:29:00.894509479Z" level=info msg="TearDown network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" successfully" Feb 13 19:29:00.895376 containerd[1509]: time="2025-02-13T19:29:00.895066978Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" returns successfully" Feb 13 19:29:00.895376 containerd[1509]: time="2025-02-13T19:29:00.894962222Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" Feb 13 19:29:00.895376 containerd[1509]: time="2025-02-13T19:29:00.895274680Z" level=info msg="Ensure that sandbox 796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc in task-service has been cleanup successfully" Feb 13 19:29:00.895489 kubelet[2633]: I0213 19:29:00.894533 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc" Feb 13 19:29:00.895688 containerd[1509]: time="2025-02-13T19:29:00.895609120Z" level=info msg="TearDown network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" successfully" Feb 13 19:29:00.895688 containerd[1509]: time="2025-02-13T19:29:00.895626973Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" returns successfully" Feb 13 19:29:00.895688 containerd[1509]: time="2025-02-13T19:29:00.895673952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:29:00.896060 containerd[1509]: time="2025-02-13T19:29:00.896009063Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:29:00.896431 containerd[1509]: time="2025-02-13T19:29:00.896387917Z" level=info msg="TearDown network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" successfully" Feb 13 19:29:00.896431 containerd[1509]: time="2025-02-13T19:29:00.896405279Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" returns successfully" Feb 13 19:29:00.896846 containerd[1509]: time="2025-02-13T19:29:00.896802988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:2,}" Feb 13 19:29:01.005633 containerd[1509]: time="2025-02-13T19:29:01.005576419Z" level=error msg="Failed to 
destroy network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.006196 containerd[1509]: time="2025-02-13T19:29:01.006174144Z" level=error msg="encountered an error cleaning up failed sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.006353 containerd[1509]: time="2025-02-13T19:29:01.006323185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.006836 kubelet[2633]: E0213 19:29:01.006778 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.007255 kubelet[2633]: E0213 19:29:01.006936 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:01.007255 kubelet[2633]: E0213 19:29:01.006961 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:01.007255 kubelet[2633]: E0213 19:29:01.006998 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t8v5n" podUID="95422681-2fb6-4df9-b5da-8fadfe907d26" Feb 13 19:29:01.034489 containerd[1509]: 
time="2025-02-13T19:29:01.034440729Z" level=error msg="Failed to destroy network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.035164 containerd[1509]: time="2025-02-13T19:29:01.035063752Z" level=error msg="encountered an error cleaning up failed sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.035164 containerd[1509]: time="2025-02-13T19:29:01.035123664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.035567 kubelet[2633]: E0213 19:29:01.035517 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.035618 kubelet[2633]: E0213 19:29:01.035588 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:29:01.035618 kubelet[2633]: E0213 19:29:01.035609 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:29:01.035688 kubelet[2633]: E0213 19:29:01.035645 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" podUID="5a1448e5-fc1e-42d8-9fd7-25807931cfd4" Feb 13 19:29:01.040585 containerd[1509]: time="2025-02-13T19:29:01.040422852Z" level=error msg="Failed to destroy network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.042914 containerd[1509]: time="2025-02-13T19:29:01.042819314Z" level=error msg="encountered an error cleaning up failed sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.043178 containerd[1509]: time="2025-02-13T19:29:01.043122466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.043906 kubelet[2633]: E0213 19:29:01.043850 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.043963 kubelet[2633]: E0213 19:29:01.043926 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:01.043963 kubelet[2633]: E0213 19:29:01.043947 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:01.044008 kubelet[2633]: E0213 19:29:01.043984 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:29:01.046066 containerd[1509]: time="2025-02-13T19:29:01.045968023Z" level=error msg="Failed to destroy network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.046734 containerd[1509]: time="2025-02-13T19:29:01.046601616Z" level=error msg="encountered an error cleaning up failed sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.046734 containerd[1509]: time="2025-02-13T19:29:01.046657272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.047356 kubelet[2633]: E0213 19:29:01.047295 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.047410 kubelet[2633]: E0213 19:29:01.047383 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:01.047410 kubelet[2633]: E0213 19:29:01.047406 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:01.047471 kubelet[2633]: E0213 19:29:01.047448 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lc7rd" podUID="bdf24e1d-a6d5-42ea-b368-ab29d0b4f983" Feb 13 19:29:01.051601 containerd[1509]: time="2025-02-13T19:29:01.051549073Z" level=error msg="Failed to destroy network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.051971 containerd[1509]: time="2025-02-13T19:29:01.051948925Z" level=error msg="encountered an error cleaning up failed sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.052021 containerd[1509]: time="2025-02-13T19:29:01.051993569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.052227 kubelet[2633]: E0213 19:29:01.052190 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.052282 kubelet[2633]: E0213 19:29:01.052247 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:01.052282 kubelet[2633]: E0213 19:29:01.052266 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:01.052333 kubelet[2633]: E0213 19:29:01.052307 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" podUID="57e05ce1-cab8-4450-a70c-4775184ae13e" Feb 13 19:29:01.052908 containerd[1509]: time="2025-02-13T19:29:01.052866954Z" level=error msg="Failed to destroy network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.053480 containerd[1509]: time="2025-02-13T19:29:01.053159315Z" level=error msg="encountered an error cleaning up failed sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.053480 containerd[1509]: time="2025-02-13T19:29:01.053197917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.053639 kubelet[2633]: E0213 19:29:01.053342 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:01.053639 kubelet[2633]: E0213 19:29:01.053393 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:29:01.053639 kubelet[2633]: E0213 19:29:01.053413 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:29:01.053737 kubelet[2633]: E0213 19:29:01.053452 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" podUID="c94995b2-7cbe-4295-8d76-ae0d1a49f166" Feb 13 19:29:01.286776 systemd[1]: run-netns-cni\x2dea9cdab4\x2dafcd\x2d40a7\x2dad7b\x2d20bc37212477.mount: Deactivated successfully. Feb 13 19:29:01.286891 systemd[1]: run-netns-cni\x2d92491743\x2df64d\x2d3d6b\x2d795e\x2de1babfa4b7c7.mount: Deactivated successfully. Feb 13 19:29:01.286967 systemd[1]: run-netns-cni\x2d604b62c8\x2d411f\x2d759f\x2d44d6\x2d82ca03f5d0dc.mount: Deactivated successfully. Feb 13 19:29:01.287042 systemd[1]: run-netns-cni\x2d2472ea4a\x2d6d12\x2d67f7\x2dc73a\x2d4b69505c4912.mount: Deactivated successfully. Feb 13 19:29:01.287112 systemd[1]: run-netns-cni\x2d9f782ba9\x2d7aa4\x2df3e3\x2dd288\x2dd34eadf2a64a.mount: Deactivated successfully. Feb 13 19:29:01.897674 kubelet[2633]: I0213 19:29:01.897625 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf" Feb 13 19:29:01.902618 containerd[1509]: time="2025-02-13T19:29:01.898403394Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\"" Feb 13 19:29:01.902618 containerd[1509]: time="2025-02-13T19:29:01.898625922Z" level=info msg="Ensure that sandbox 4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf in task-service has been cleanup successfully" Feb 13 19:29:01.901449 systemd[1]: run-netns-cni\x2d26e49694\x2db065\x2d7e12\x2d92f8\x2d95ee96a46ab4.mount: Deactivated successfully. 
Feb 13 19:29:01.903444 containerd[1509]: time="2025-02-13T19:29:01.903366528Z" level=info msg="TearDown network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" successfully" Feb 13 19:29:01.903444 containerd[1509]: time="2025-02-13T19:29:01.903387959Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" returns successfully" Feb 13 19:29:01.903833 containerd[1509]: time="2025-02-13T19:29:01.903751854Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" Feb 13 19:29:01.904306 containerd[1509]: time="2025-02-13T19:29:01.904199938Z" level=info msg="TearDown network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" successfully" Feb 13 19:29:01.904306 containerd[1509]: time="2025-02-13T19:29:01.904216719Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" returns successfully" Feb 13 19:29:01.904535 containerd[1509]: time="2025-02-13T19:29:01.904513318Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:29:01.904727 containerd[1509]: time="2025-02-13T19:29:01.904688999Z" level=info msg="TearDown network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" successfully" Feb 13 19:29:01.904727 containerd[1509]: time="2025-02-13T19:29:01.904712433Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" returns successfully" Feb 13 19:29:01.906232 kubelet[2633]: I0213 19:29:01.905049 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3" Feb 13 19:29:01.906305 containerd[1509]: time="2025-02-13T19:29:01.905687239Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\"" Feb 13 19:29:01.906305 containerd[1509]: time="2025-02-13T19:29:01.905720571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:29:01.907398 containerd[1509]: time="2025-02-13T19:29:01.907369997Z" level=info msg="Ensure that sandbox e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3 in task-service has been cleanup successfully" Feb 13 19:29:01.910126 systemd[1]: run-netns-cni\x2d0445eda6\x2d8ed7\x2dc56a\x2df9ae\x2dbf4924681d8a.mount: Deactivated successfully. 
Feb 13 19:29:01.910486 containerd[1509]: time="2025-02-13T19:29:01.910444235Z" level=info msg="TearDown network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" successfully" Feb 13 19:29:01.910486 containerd[1509]: time="2025-02-13T19:29:01.910464203Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" returns successfully" Feb 13 19:29:01.911180 containerd[1509]: time="2025-02-13T19:29:01.911146538Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" Feb 13 19:29:01.911257 containerd[1509]: time="2025-02-13T19:29:01.911237559Z" level=info msg="TearDown network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" successfully" Feb 13 19:29:01.911326 containerd[1509]: time="2025-02-13T19:29:01.911254832Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" returns successfully" Feb 13 19:29:01.911903 kubelet[2633]: I0213 19:29:01.911505 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e" Feb 13 19:29:01.912041 containerd[1509]: time="2025-02-13T19:29:01.911984827Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\"" Feb 13 19:29:01.912179 containerd[1509]: time="2025-02-13T19:29:01.912145248Z" level=info msg="Ensure that sandbox 374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e in task-service has been cleanup successfully" Feb 13 19:29:01.912421 containerd[1509]: time="2025-02-13T19:29:01.912308636Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:29:01.912421 containerd[1509]: time="2025-02-13T19:29:01.912391352Z" level=info msg="TearDown network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" successfully" Feb 13 19:29:01.912421 containerd[1509]: time="2025-02-13T19:29:01.912403765Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" returns successfully" Feb 13 19:29:01.912561 containerd[1509]: time="2025-02-13T19:29:01.912540773Z" level=info msg="TearDown network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" successfully" Feb 13 19:29:01.912885 kubelet[2633]: E0213 19:29:01.912710 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:01.913016 containerd[1509]: time="2025-02-13T19:29:01.912957057Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" returns successfully" Feb 13 19:29:01.913756 containerd[1509]: time="2025-02-13T19:29:01.913316994Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" Feb 13 19:29:01.913756 containerd[1509]: time="2025-02-13T19:29:01.913405421Z" level=info msg="TearDown network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" successfully" Feb 13 19:29:01.913756 containerd[1509]: time="2025-02-13T19:29:01.913418295Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" returns successfully" Feb 13 19:29:01.913756 containerd[1509]: time="2025-02-13T19:29:01.913499308Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:3,}" Feb 13 19:29:01.914775 containerd[1509]: time="2025-02-13T19:29:01.914745244Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:29:01.915000 kubelet[2633]: I0213 19:29:01.914986 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d" Feb 13 19:29:01.915236 containerd[1509]: time="2025-02-13T19:29:01.915134267Z" level=info msg="TearDown network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" successfully" Feb 13 19:29:01.915236 containerd[1509]: time="2025-02-13T19:29:01.915148864Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" returns successfully" Feb 13 19:29:01.915616 systemd[1]: run-netns-cni\x2d6f7f783c\x2d4854\x2dfaaf\x2d4919\x2ddfd7e9f946a9.mount: Deactivated successfully. Feb 13 19:29:01.915915 containerd[1509]: time="2025-02-13T19:29:01.915833433Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\"" Feb 13 19:29:01.916035 kubelet[2633]: E0213 19:29:01.916011 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:01.916102 containerd[1509]: time="2025-02-13T19:29:01.916050932Z" level=info msg="Ensure that sandbox f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d in task-service has been cleanup successfully" Feb 13 19:29:01.917049 containerd[1509]: time="2025-02-13T19:29:01.916169285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:3,}" Feb 13 19:29:01.917049 containerd[1509]: time="2025-02-13T19:29:01.916410910Z" level=info msg="TearDown network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" successfully" Feb 13 19:29:01.917049 containerd[1509]: time="2025-02-13T19:29:01.916593504Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" returns successfully" Feb 13 19:29:01.917797 containerd[1509]: time="2025-02-13T19:29:01.917743099Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" Feb 13 19:29:01.918169 containerd[1509]: time="2025-02-13T19:29:01.918153081Z" level=info msg="TearDown network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" successfully" Feb 13 19:29:01.918428 containerd[1509]: time="2025-02-13T19:29:01.918412600Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" returns successfully" Feb 13 19:29:01.918805 kubelet[2633]: I0213 19:29:01.918554 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75" Feb 13 19:29:01.920892 containerd[1509]: time="2025-02-13T19:29:01.919844034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:2,}" Feb 13 19:29:01.920892 containerd[1509]: time="2025-02-13T19:29:01.920275477Z" level=info 
msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\"" Feb 13 19:29:01.919918 systemd[1]: run-netns-cni\x2deb0c60ee\x2d474f\x2d75c5\x2d8ae5\x2d519af155111c.mount: Deactivated successfully. Feb 13 19:29:01.921121 containerd[1509]: time="2025-02-13T19:29:01.921092776Z" level=info msg="Ensure that sandbox e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75 in task-service has been cleanup successfully" Feb 13 19:29:01.923560 containerd[1509]: time="2025-02-13T19:29:01.921643954Z" level=info msg="TearDown network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" successfully" Feb 13 19:29:01.923654 containerd[1509]: time="2025-02-13T19:29:01.923557527Z" level=info msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" returns successfully" Feb 13 19:29:01.924247 containerd[1509]: time="2025-02-13T19:29:01.924208893Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" Feb 13 19:29:01.924343 containerd[1509]: time="2025-02-13T19:29:01.924310695Z" level=info msg="TearDown network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" successfully" Feb 13 19:29:01.924343 containerd[1509]: time="2025-02-13T19:29:01.924331134Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" returns successfully" Feb 13 19:29:01.925384 containerd[1509]: time="2025-02-13T19:29:01.925347658Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:29:01.925720 containerd[1509]: time="2025-02-13T19:29:01.925690313Z" level=info msg="TearDown network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" successfully" Feb 13 19:29:01.925720 containerd[1509]: time="2025-02-13T19:29:01.925711753Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" returns successfully" Feb 13 19:29:01.926266 kubelet[2633]: I0213 19:29:01.926232 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627" Feb 13 19:29:01.926727 containerd[1509]: time="2025-02-13T19:29:01.926676340Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\"" Feb 13 19:29:01.926919 containerd[1509]: time="2025-02-13T19:29:01.926886375Z" level=info msg="Ensure that sandbox cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627 in task-service has been cleanup successfully" Feb 13 19:29:01.927295 containerd[1509]: time="2025-02-13T19:29:01.927266300Z" level=info msg="TearDown network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" successfully" Feb 13 19:29:01.927295 containerd[1509]: time="2025-02-13T19:29:01.927288543Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" returns successfully" Feb 13 19:29:01.927449 containerd[1509]: time="2025-02-13T19:29:01.927417846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:29:01.927695 containerd[1509]: time="2025-02-13T19:29:01.927670392Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" Feb 13 
19:29:01.927849 containerd[1509]: time="2025-02-13T19:29:01.927827197Z" level=info msg="TearDown network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" successfully" Feb 13 19:29:01.927895 containerd[1509]: time="2025-02-13T19:29:01.927848146Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" returns successfully" Feb 13 19:29:01.928530 containerd[1509]: time="2025-02-13T19:29:01.928495625Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:29:01.928608 containerd[1509]: time="2025-02-13T19:29:01.928588230Z" level=info msg="TearDown network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" successfully" Feb 13 19:29:01.928638 containerd[1509]: time="2025-02-13T19:29:01.928608067Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" returns successfully" Feb 13 19:29:01.931967 containerd[1509]: time="2025-02-13T19:29:01.930494599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:3,}" Feb 13 19:29:02.285400 systemd[1]: run-netns-cni\x2d35ee3835\x2d1f45\x2d36cd\x2de710\x2ddb786855b069.mount: Deactivated successfully. Feb 13 19:29:02.285508 systemd[1]: run-netns-cni\x2dd9279d7f\x2d128f\x2db609\x2d2832\x2d371541fcea42.mount: Deactivated successfully. Feb 13 19:29:02.441421 containerd[1509]: time="2025-02-13T19:29:02.440944681Z" level=error msg="Failed to destroy network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.441592 containerd[1509]: time="2025-02-13T19:29:02.441541785Z" level=error msg="encountered an error cleaning up failed sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.441624 containerd[1509]: time="2025-02-13T19:29:02.441590366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.441950 kubelet[2633]: E0213 19:29:02.441911 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.442096 kubelet[2633]: E0213 19:29:02.441970 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:29:02.442176 kubelet[2633]: E0213 19:29:02.442102 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:29:02.442235 kubelet[2633]: E0213 19:29:02.442203 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" podUID="5a1448e5-fc1e-42d8-9fd7-25807931cfd4" Feb 13 19:29:02.460680 containerd[1509]: time="2025-02-13T19:29:02.460614358Z" level=error msg="Failed to destroy network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.461232 containerd[1509]: time="2025-02-13T19:29:02.461184281Z" level=error msg="encountered an error cleaning up failed sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.462642 containerd[1509]: time="2025-02-13T19:29:02.462168324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.463033 kubelet[2633]: E0213 19:29:02.463001 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.463098 kubelet[2633]: 
E0213 19:29:02.463047 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:02.463098 kubelet[2633]: E0213 19:29:02.463069 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:02.463167 kubelet[2633]: E0213 19:29:02.463103 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" podUID="57e05ce1-cab8-4450-a70c-4775184ae13e" Feb 13 19:29:02.465669 containerd[1509]: time="2025-02-13T19:29:02.465406439Z" level=error msg="Failed to destroy network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.467275 containerd[1509]: time="2025-02-13T19:29:02.467242846Z" level=error msg="encountered an error cleaning up failed sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.467731 containerd[1509]: time="2025-02-13T19:29:02.467662045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.467949 kubelet[2633]: E0213 19:29:02.467903 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.468042 kubelet[2633]: E0213 19:29:02.467970 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:29:02.468042 kubelet[2633]: E0213 19:29:02.467992 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:29:02.468146 kubelet[2633]: E0213 19:29:02.468036 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" podUID="c94995b2-7cbe-4295-8d76-ae0d1a49f166" Feb 13 19:29:02.475524 containerd[1509]: time="2025-02-13T19:29:02.475478349Z" level=error msg="Failed to destroy network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.476034 containerd[1509]: time="2025-02-13T19:29:02.476012244Z" level=error msg="encountered an error cleaning up failed sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.476151 containerd[1509]: time="2025-02-13T19:29:02.476132540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.476412 kubelet[2633]: E0213 19:29:02.476375 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.476542 kubelet[2633]: E0213 19:29:02.476502 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:02.476604 kubelet[2633]: E0213 19:29:02.476591 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:02.476710 kubelet[2633]: E0213 19:29:02.476675 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:29:02.479451 containerd[1509]: time="2025-02-13T19:29:02.479410861Z" level=error msg="Failed to destroy network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.479859 containerd[1509]: time="2025-02-13T19:29:02.479827426Z" level=error msg="encountered an error cleaning up failed sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.479909 containerd[1509]: time="2025-02-13T19:29:02.479895424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.480113 kubelet[2633]: E0213 19:29:02.480083 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.480152 kubelet[2633]: E0213 19:29:02.480124 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:02.480152 kubelet[2633]: E0213 19:29:02.480140 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:02.480208 kubelet[2633]: E0213 19:29:02.480179 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t8v5n" podUID="95422681-2fb6-4df9-b5da-8fadfe907d26" Feb 13 19:29:02.481018 containerd[1509]: time="2025-02-13T19:29:02.480966119Z" level=error msg="Failed to destroy network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.481571 containerd[1509]: time="2025-02-13T19:29:02.481547343Z" level=error msg="encountered an error cleaning up failed sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.481680 containerd[1509]: time="2025-02-13T19:29:02.481660406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.481961 kubelet[2633]: E0213 19:29:02.481931 2633 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:02.482025 kubelet[2633]: E0213 19:29:02.481961 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:02.482025 kubelet[2633]: E0213 19:29:02.481980 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:02.482025 kubelet[2633]: E0213 19:29:02.482014 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lc7rd" podUID="bdf24e1d-a6d5-42ea-b368-ab29d0b4f983" Feb 13 19:29:02.931165 kubelet[2633]: I0213 19:29:02.930562 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169" Feb 13 19:29:02.931659 containerd[1509]: time="2025-02-13T19:29:02.931320178Z" level=info msg="StopPodSandbox for \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\"" Feb 13 19:29:02.931659 containerd[1509]: time="2025-02-13T19:29:02.931599153Z" level=info msg="Ensure that sandbox 110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169 in task-service has been cleanup successfully" Feb 13 19:29:02.931986 containerd[1509]: time="2025-02-13T19:29:02.931881364Z" level=info msg="TearDown network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" successfully" Feb 13 19:29:02.931986 containerd[1509]: time="2025-02-13T19:29:02.931897385Z" level=info msg="StopPodSandbox for \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" returns successfully" Feb 13 19:29:02.932350 containerd[1509]: time="2025-02-13T19:29:02.932328096Z" level=info msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\"" Feb 13 19:29:02.932428 containerd[1509]: time="2025-02-13T19:29:02.932409338Z" level=info msg="TearDown network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" successfully" Feb 13 
19:29:02.932458 containerd[1509]: time="2025-02-13T19:29:02.932427362Z" level=info msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" returns successfully" Feb 13 19:29:02.932711 containerd[1509]: time="2025-02-13T19:29:02.932680428Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" Feb 13 19:29:02.932845 containerd[1509]: time="2025-02-13T19:29:02.932800785Z" level=info msg="TearDown network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" successfully" Feb 13 19:29:02.932845 containerd[1509]: time="2025-02-13T19:29:02.932813929Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" returns successfully" Feb 13 19:29:02.933119 containerd[1509]: time="2025-02-13T19:29:02.933099116Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:29:02.933197 containerd[1509]: time="2025-02-13T19:29:02.933180450Z" level=info msg="TearDown network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" successfully" Feb 13 19:29:02.933197 containerd[1509]: time="2025-02-13T19:29:02.933195207Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" returns successfully" Feb 13 19:29:02.933664 containerd[1509]: time="2025-02-13T19:29:02.933607523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:29:03.223781 kubelet[2633]: I0213 19:29:03.223659 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c" Feb 13 19:29:03.224680 containerd[1509]: time="2025-02-13T19:29:03.224512103Z" level=info msg="StopPodSandbox for \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\"" Feb 13 19:29:03.224783 containerd[1509]: time="2025-02-13T19:29:03.224748137Z" level=info msg="Ensure that sandbox 9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c in task-service has been cleanup successfully" Feb 13 19:29:03.225559 containerd[1509]: time="2025-02-13T19:29:03.225531392Z" level=info msg="TearDown network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" successfully" Feb 13 19:29:03.225878 containerd[1509]: time="2025-02-13T19:29:03.225814655Z" level=info msg="StopPodSandbox for \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" returns successfully" Feb 13 19:29:03.226332 containerd[1509]: time="2025-02-13T19:29:03.226298725Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\"" Feb 13 19:29:03.226522 containerd[1509]: time="2025-02-13T19:29:03.226507328Z" level=info msg="TearDown network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" successfully" Feb 13 19:29:03.226581 kubelet[2633]: I0213 19:29:03.226539 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833" Feb 13 19:29:03.226689 containerd[1509]: time="2025-02-13T19:29:03.226642342Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" returns successfully" Feb 13 19:29:03.227119 containerd[1509]: 
time="2025-02-13T19:29:03.227073393Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" Feb 13 19:29:03.227221 containerd[1509]: time="2025-02-13T19:29:03.227176377Z" level=info msg="TearDown network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" successfully" Feb 13 19:29:03.227221 containerd[1509]: time="2025-02-13T19:29:03.227192878Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" returns successfully" Feb 13 19:29:03.227476 containerd[1509]: time="2025-02-13T19:29:03.227337831Z" level=info msg="StopPodSandbox for \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\"" Feb 13 19:29:03.227539 containerd[1509]: time="2025-02-13T19:29:03.227508552Z" level=info msg="Ensure that sandbox 794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833 in task-service has been cleanup successfully" Feb 13 19:29:03.228040 containerd[1509]: time="2025-02-13T19:29:03.228002011Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:29:03.228245 containerd[1509]: time="2025-02-13T19:29:03.228223668Z" level=info msg="TearDown network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" successfully" Feb 13 19:29:03.228398 containerd[1509]: time="2025-02-13T19:29:03.228241913Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" returns successfully" Feb 13 19:29:03.228901 containerd[1509]: time="2025-02-13T19:29:03.228877268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:4,}" Feb 13 19:29:03.229287 containerd[1509]: time="2025-02-13T19:29:03.229252284Z" level=info msg="TearDown network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" successfully" Feb 13 19:29:03.229414 containerd[1509]: time="2025-02-13T19:29:03.229324349Z" level=info msg="StopPodSandbox for \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" returns successfully" Feb 13 19:29:03.229449 kubelet[2633]: I0213 19:29:03.229348 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49" Feb 13 19:29:03.229716 containerd[1509]: time="2025-02-13T19:29:03.229689878Z" level=info msg="StopPodSandbox for \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\"" Feb 13 19:29:03.229933 containerd[1509]: time="2025-02-13T19:29:03.229902407Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\"" Feb 13 19:29:03.230058 containerd[1509]: time="2025-02-13T19:29:03.229999050Z" level=info msg="TearDown network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" successfully" Feb 13 19:29:03.230058 containerd[1509]: time="2025-02-13T19:29:03.230030048Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" returns successfully" Feb 13 19:29:03.230142 containerd[1509]: time="2025-02-13T19:29:03.230082827Z" level=info msg="Ensure that sandbox e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49 in task-service has been cleanup successfully" Feb 13 19:29:03.230376 containerd[1509]: time="2025-02-13T19:29:03.230341744Z" level=info 
msg="TearDown network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" successfully" Feb 13 19:29:03.230412 containerd[1509]: time="2025-02-13T19:29:03.230383233Z" level=info msg="StopPodSandbox for \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" returns successfully" Feb 13 19:29:03.230936 containerd[1509]: time="2025-02-13T19:29:03.230840854Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" Feb 13 19:29:03.230988 containerd[1509]: time="2025-02-13T19:29:03.230963064Z" level=info msg="TearDown network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" successfully" Feb 13 19:29:03.231027 containerd[1509]: time="2025-02-13T19:29:03.230987500Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" returns successfully" Feb 13 19:29:03.231148 containerd[1509]: time="2025-02-13T19:29:03.231070986Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\"" Feb 13 19:29:03.231183 containerd[1509]: time="2025-02-13T19:29:03.231165404Z" level=info msg="TearDown network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" successfully" Feb 13 19:29:03.231207 containerd[1509]: time="2025-02-13T19:29:03.231190992Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" returns successfully" Feb 13 19:29:03.231639 containerd[1509]: time="2025-02-13T19:29:03.231616333Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" Feb 13 19:29:03.231914 containerd[1509]: time="2025-02-13T19:29:03.231712194Z" level=info msg="TearDown network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" successfully" Feb 13 19:29:03.231914 containerd[1509]: time="2025-02-13T19:29:03.231730428Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" returns successfully" Feb 13 19:29:03.231914 containerd[1509]: time="2025-02-13T19:29:03.231729145Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:29:03.231914 containerd[1509]: time="2025-02-13T19:29:03.231859371Z" level=info msg="TearDown network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" successfully" Feb 13 19:29:03.231914 containerd[1509]: time="2025-02-13T19:29:03.231869540Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" returns successfully" Feb 13 19:29:03.232271 containerd[1509]: time="2025-02-13T19:29:03.232254525Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:29:03.232522 containerd[1509]: time="2025-02-13T19:29:03.232258632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:29:03.232522 containerd[1509]: time="2025-02-13T19:29:03.232381724Z" level=info msg="TearDown network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" successfully" Feb 13 19:29:03.232522 containerd[1509]: time="2025-02-13T19:29:03.232391793Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" returns successfully" 
Feb 13 19:29:03.232642 kubelet[2633]: E0213 19:29:03.232537 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:03.233029 containerd[1509]: time="2025-02-13T19:29:03.233006119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:4,}" Feb 13 19:29:03.233111 kubelet[2633]: I0213 19:29:03.233084 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c" Feb 13 19:29:03.234075 containerd[1509]: time="2025-02-13T19:29:03.233738136Z" level=info msg="StopPodSandbox for \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\"" Feb 13 19:29:03.234075 containerd[1509]: time="2025-02-13T19:29:03.233935557Z" level=info msg="Ensure that sandbox 5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c in task-service has been cleanup successfully" Feb 13 19:29:03.235320 containerd[1509]: time="2025-02-13T19:29:03.234918898Z" level=info msg="TearDown network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" successfully" Feb 13 19:29:03.235320 containerd[1509]: time="2025-02-13T19:29:03.234942863Z" level=info msg="StopPodSandbox for \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" returns successfully" Feb 13 19:29:03.235614 containerd[1509]: time="2025-02-13T19:29:03.235549636Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\"" Feb 13 19:29:03.235647 containerd[1509]: time="2025-02-13T19:29:03.235636859Z" level=info msg="TearDown network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" successfully" Feb 13 19:29:03.235677 containerd[1509]: time="2025-02-13T19:29:03.235651928Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" returns successfully" Feb 13 19:29:03.236518 containerd[1509]: time="2025-02-13T19:29:03.236497259Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" Feb 13 19:29:03.236667 containerd[1509]: time="2025-02-13T19:29:03.236648684Z" level=info msg="TearDown network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" successfully" Feb 13 19:29:03.236720 containerd[1509]: time="2025-02-13T19:29:03.236708978Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" returns successfully" Feb 13 19:29:03.237637 containerd[1509]: time="2025-02-13T19:29:03.237557324Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:29:03.237720 containerd[1509]: time="2025-02-13T19:29:03.237660057Z" level=info msg="TearDown network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" successfully" Feb 13 19:29:03.237720 containerd[1509]: time="2025-02-13T19:29:03.237680687Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" returns successfully" Feb 13 19:29:03.239102 kubelet[2633]: E0213 19:29:03.238962 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
19:29:03.239273 containerd[1509]: time="2025-02-13T19:29:03.239247926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:4,}" Feb 13 19:29:03.239669 kubelet[2633]: I0213 19:29:03.239651 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769" Feb 13 19:29:03.240235 containerd[1509]: time="2025-02-13T19:29:03.240217059Z" level=info msg="StopPodSandbox for \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\"" Feb 13 19:29:03.240493 containerd[1509]: time="2025-02-13T19:29:03.240476427Z" level=info msg="Ensure that sandbox ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769 in task-service has been cleanup successfully" Feb 13 19:29:03.240737 containerd[1509]: time="2025-02-13T19:29:03.240718543Z" level=info msg="TearDown network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" successfully" Feb 13 19:29:03.240810 containerd[1509]: time="2025-02-13T19:29:03.240797001Z" level=info msg="StopPodSandbox for \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" returns successfully" Feb 13 19:29:03.241154 containerd[1509]: time="2025-02-13T19:29:03.241132242Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\"" Feb 13 19:29:03.241246 containerd[1509]: time="2025-02-13T19:29:03.241227150Z" level=info msg="TearDown network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" successfully" Feb 13 19:29:03.241271 containerd[1509]: time="2025-02-13T19:29:03.241245716Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" returns successfully" Feb 13 19:29:03.241546 containerd[1509]: time="2025-02-13T19:29:03.241522255Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" Feb 13 19:29:03.241681 containerd[1509]: time="2025-02-13T19:29:03.241657470Z" level=info msg="TearDown network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" successfully" Feb 13 19:29:03.241981 containerd[1509]: time="2025-02-13T19:29:03.241678621Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" returns successfully" Feb 13 19:29:03.242219 containerd[1509]: time="2025-02-13T19:29:03.242184913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:3,}" Feb 13 19:29:03.285077 systemd[1]: run-netns-cni\x2d4d84022f\x2d18f3\x2dbd60\x2d9e85\x2d50af1b8ded64.mount: Deactivated successfully. Feb 13 19:29:03.285601 systemd[1]: run-netns-cni\x2d54ba0418\x2d4bc0\x2de0c1\x2dff68\x2de632356ceb71.mount: Deactivated successfully. Feb 13 19:29:03.285699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c-shm.mount: Deactivated successfully. Feb 13 19:29:03.285812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49-shm.mount: Deactivated successfully. Feb 13 19:29:03.285914 systemd[1]: run-netns-cni\x2df8b0f063\x2d84bf\x2d3cd0\x2dfc8b\x2d40d2ad534b55.mount: Deactivated successfully. 
Feb 13 19:29:03.285986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169-shm.mount: Deactivated successfully. Feb 13 19:29:03.286058 systemd[1]: run-netns-cni\x2d07bb6035\x2de634\x2d7f29\x2dd820\x2dc1ec418c889e.mount: Deactivated successfully. Feb 13 19:29:03.286127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833-shm.mount: Deactivated successfully. Feb 13 19:29:04.338704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516146183.mount: Deactivated successfully. Feb 13 19:29:05.563792 containerd[1509]: time="2025-02-13T19:29:05.562936519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:05.587797 containerd[1509]: time="2025-02-13T19:29:05.585431064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:29:05.601981 containerd[1509]: time="2025-02-13T19:29:05.601925923Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:05.640635 containerd[1509]: time="2025-02-13T19:29:05.640593812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:05.641261 containerd[1509]: time="2025-02-13T19:29:05.641069858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.76812313s" Feb 13 19:29:05.641261 containerd[1509]: time="2025-02-13T19:29:05.641112849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:29:05.657322 containerd[1509]: time="2025-02-13T19:29:05.657276545Z" level=info msg="CreateContainer within sandbox \"ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:29:05.678073 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:39950.service - OpenSSH per-connection server daemon (10.0.0.1:39950). 
Feb 13 19:29:05.685297 containerd[1509]: time="2025-02-13T19:29:05.685217687Z" level=info msg="CreateContainer within sandbox \"ac8c747512aa70eb33cd6bd300bca8c03af367f901c80695f2edef7a8f19ff1a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66ce0fd54ace530b27436313252b11953bf05a04d56144f19e935805a3fd986b\"" Feb 13 19:29:05.689710 containerd[1509]: time="2025-02-13T19:29:05.688355409Z" level=info msg="StartContainer for \"66ce0fd54ace530b27436313252b11953bf05a04d56144f19e935805a3fd986b\"" Feb 13 19:29:05.739375 containerd[1509]: time="2025-02-13T19:29:05.739321974Z" level=error msg="Failed to destroy network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.741743 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 39950 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:05.745064 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:05.748794 containerd[1509]: time="2025-02-13T19:29:05.748737576Z" level=error msg="Failed to destroy network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.749598 containerd[1509]: time="2025-02-13T19:29:05.749547509Z" level=error msg="encountered an error cleaning up failed sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.749663 containerd[1509]: time="2025-02-13T19:29:05.749631558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.749912 kubelet[2633]: E0213 19:29:05.749867 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.750225 kubelet[2633]: E0213 19:29:05.749932 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:29:05.750225 
kubelet[2633]: E0213 19:29:05.749954 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" Feb 13 19:29:05.750225 kubelet[2633]: E0213 19:29:05.749990 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-l74z9_calico-apiserver(c94995b2-7cbe-4295-8d76-ae0d1a49f166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" podUID="c94995b2-7cbe-4295-8d76-ae0d1a49f166" Feb 13 19:29:05.751453 systemd-logind[1491]: New session 10 of user core. Feb 13 19:29:05.752488 containerd[1509]: time="2025-02-13T19:29:05.752344682Z" level=error msg="Failed to destroy network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.752999 containerd[1509]: time="2025-02-13T19:29:05.752927848Z" level=error msg="encountered an error cleaning up failed sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.753151 containerd[1509]: time="2025-02-13T19:29:05.753009763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.753200 kubelet[2633]: E0213 19:29:05.753174 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.753231 kubelet[2633]: E0213 19:29:05.753203 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:05.753231 kubelet[2633]: E0213 19:29:05.753221 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" Feb 13 19:29:05.753287 kubelet[2633]: E0213 19:29:05.753248 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c445f8fb-sdsg2_calico-apiserver(57e05ce1-cab8-4450-a70c-4775184ae13e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" podUID="57e05ce1-cab8-4450-a70c-4775184ae13e" Feb 13 19:29:05.755460 containerd[1509]: time="2025-02-13T19:29:05.754857669Z" level=error msg="encountered an error cleaning up failed sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.755460 containerd[1509]: time="2025-02-13T19:29:05.754939233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.755530 kubelet[2633]: E0213 19:29:05.755153 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.755530 kubelet[2633]: E0213 19:29:05.755222 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:05.755530 kubelet[2633]: E0213 19:29:05.755237 2633 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lc7rd" Feb 13 19:29:05.755664 kubelet[2633]: E0213 19:29:05.755276 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lc7rd_kube-system(bdf24e1d-a6d5-42ea-b368-ab29d0b4f983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lc7rd" podUID="bdf24e1d-a6d5-42ea-b368-ab29d0b4f983" Feb 13 19:29:05.758187 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:29:05.761723 containerd[1509]: time="2025-02-13T19:29:05.761673641Z" level=error msg="Failed to destroy network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.764834 containerd[1509]: time="2025-02-13T19:29:05.764248494Z" level=error msg="encountered an error cleaning up failed sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.764986 containerd[1509]: time="2025-02-13T19:29:05.764955694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.765503 kubelet[2633]: E0213 19:29:05.765464 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.765554 kubelet[2633]: E0213 19:29:05.765518 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:05.765554 kubelet[2633]: E0213 19:29:05.765537 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t8v5n" Feb 13 19:29:05.765735 kubelet[2633]: E0213 19:29:05.765564 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t8v5n_kube-system(95422681-2fb6-4df9-b5da-8fadfe907d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t8v5n" podUID="95422681-2fb6-4df9-b5da-8fadfe907d26" Feb 13 19:29:05.769171 containerd[1509]: time="2025-02-13T19:29:05.769025310Z" level=error msg="Failed to destroy network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.769541 containerd[1509]: time="2025-02-13T19:29:05.769494102Z" level=error msg="encountered an error cleaning up failed sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.769752 containerd[1509]: time="2025-02-13T19:29:05.769560897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.769830 kubelet[2633]: E0213 19:29:05.769749 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.769858 kubelet[2633]: E0213 19:29:05.769835 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:29:05.769858 kubelet[2633]: E0213 19:29:05.769852 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" Feb 13 19:29:05.770959 kubelet[2633]: E0213 19:29:05.769901 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b67b658d9-8gkf8_calico-system(5a1448e5-fc1e-42d8-9fd7-25807931cfd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" podUID="5a1448e5-fc1e-42d8-9fd7-25807931cfd4" Feb 13 19:29:05.776496 containerd[1509]: time="2025-02-13T19:29:05.776374865Z" level=error msg="Failed to destroy network for sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.776817 containerd[1509]: time="2025-02-13T19:29:05.776792491Z" level=error msg="encountered an error cleaning up failed sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.776881 containerd[1509]: time="2025-02-13T19:29:05.776853846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.777023 kubelet[2633]: E0213 19:29:05.777001 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:29:05.777136 kubelet[2633]: E0213 19:29:05.777102 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:05.777136 kubelet[2633]: E0213 19:29:05.777125 2633 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6nvb" Feb 13 19:29:05.777281 kubelet[2633]: E0213 19:29:05.777158 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6nvb_calico-system(5eaecbe6-c19b-4299-995d-b27991011c1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6nvb" podUID="5eaecbe6-c19b-4299-995d-b27991011c1a" Feb 13 19:29:05.804900 systemd[1]: Started cri-containerd-66ce0fd54ace530b27436313252b11953bf05a04d56144f19e935805a3fd986b.scope - libcontainer container 66ce0fd54ace530b27436313252b11953bf05a04d56144f19e935805a3fd986b. Feb 13 19:29:05.840632 containerd[1509]: time="2025-02-13T19:29:05.840511435Z" level=info msg="StartContainer for \"66ce0fd54ace530b27436313252b11953bf05a04d56144f19e935805a3fd986b\" returns successfully" Feb 13 19:29:05.902128 sshd[4471]: Connection closed by 10.0.0.1 port 39950 Feb 13 19:29:05.903793 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:05.907326 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:39950.service: Deactivated successfully. Feb 13 19:29:05.909607 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:29:05.911528 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:29:05.912404 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:29:05.912507 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 19:29:05.913363 systemd-logind[1491]: Removed session 10.
Feb 13 19:29:06.247450 kubelet[2633]: I0213 19:29:06.247327 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b" Feb 13 19:29:06.247844 containerd[1509]: time="2025-02-13T19:29:06.247810847Z" level=info msg="StopPodSandbox for \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\"" Feb 13 19:29:06.248132 containerd[1509]: time="2025-02-13T19:29:06.248108688Z" level=info msg="Ensure that sandbox c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b in task-service has been cleanup successfully" Feb 13 19:29:06.248308 containerd[1509]: time="2025-02-13T19:29:06.248281663Z" level=info msg="TearDown network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\" successfully" Feb 13 19:29:06.248308 containerd[1509]: time="2025-02-13T19:29:06.248296140Z" level=info msg="StopPodSandbox for \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\" returns successfully" Feb 13 19:29:06.248715 containerd[1509]: time="2025-02-13T19:29:06.248664072Z" level=info msg="StopPodSandbox for \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\"" Feb 13 19:29:06.248920 containerd[1509]: time="2025-02-13T19:29:06.248838109Z" level=info msg="TearDown network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" successfully" Feb 13 19:29:06.248920 containerd[1509]: time="2025-02-13T19:29:06.248914193Z" level=info msg="StopPodSandbox for \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" returns successfully" Feb 13 19:29:06.249254 containerd[1509]: time="2025-02-13T19:29:06.249184341Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\"" Feb 13 19:29:06.249308 containerd[1509]: time="2025-02-13T19:29:06.249288637Z" level=info msg="TearDown network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" successfully" Feb 13 19:29:06.249308 containerd[1509]: time="2025-02-13T19:29:06.249301421Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" returns successfully" Feb 13 19:29:06.249598 containerd[1509]: time="2025-02-13T19:29:06.249572450Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" Feb 13 19:29:06.249699 containerd[1509]: time="2025-02-13T19:29:06.249679492Z" level=info msg="TearDown network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" successfully" Feb 13 19:29:06.249753 containerd[1509]: time="2025-02-13T19:29:06.249696975Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" returns successfully" Feb 13 19:29:06.250028 kubelet[2633]: I0213 19:29:06.249955 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7" Feb 13 19:29:06.250555 containerd[1509]: time="2025-02-13T19:29:06.250515274Z" level=info msg="StopPodSandbox for \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\"" Feb 13 19:29:06.250743 containerd[1509]: time="2025-02-13T19:29:06.250723606Z" level=info msg="Ensure that sandbox dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7 in task-service has been cleanup successfully" Feb 13 19:29:06.250985 containerd[1509]: time="2025-02-13T19:29:06.250958347Z" level=info 
msg="TearDown network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\" successfully" Feb 13 19:29:06.250985 containerd[1509]: time="2025-02-13T19:29:06.250980398Z" level=info msg="StopPodSandbox for \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\" returns successfully" Feb 13 19:29:06.251156 containerd[1509]: time="2025-02-13T19:29:06.250994965Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:29:06.251156 containerd[1509]: time="2025-02-13T19:29:06.251089023Z" level=info msg="TearDown network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" successfully" Feb 13 19:29:06.251156 containerd[1509]: time="2025-02-13T19:29:06.251098741Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" returns successfully" Feb 13 19:29:06.251556 containerd[1509]: time="2025-02-13T19:29:06.251526395Z" level=info msg="StopPodSandbox for \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\"" Feb 13 19:29:06.251728 containerd[1509]: time="2025-02-13T19:29:06.251540342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:29:06.251901 containerd[1509]: time="2025-02-13T19:29:06.251832661Z" level=info msg="TearDown network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" successfully" Feb 13 19:29:06.251901 containerd[1509]: time="2025-02-13T19:29:06.251853811Z" level=info msg="StopPodSandbox for \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" returns successfully" Feb 13 19:29:06.252270 containerd[1509]: time="2025-02-13T19:29:06.252239577Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\"" Feb 13 19:29:06.252373 containerd[1509]: time="2025-02-13T19:29:06.252341428Z" level=info msg="TearDown network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" successfully" Feb 13 19:29:06.252373 containerd[1509]: time="2025-02-13T19:29:06.252361395Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" returns successfully" Feb 13 19:29:06.252821 containerd[1509]: time="2025-02-13T19:29:06.252794541Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" Feb 13 19:29:06.252951 containerd[1509]: time="2025-02-13T19:29:06.252895089Z" level=info msg="TearDown network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" successfully" Feb 13 19:29:06.252993 containerd[1509]: time="2025-02-13T19:29:06.252950895Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" returns successfully" Feb 13 19:29:06.253210 containerd[1509]: time="2025-02-13T19:29:06.253178493Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:29:06.253335 containerd[1509]: time="2025-02-13T19:29:06.253276156Z" level=info msg="TearDown network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" successfully" Feb 13 19:29:06.253335 containerd[1509]: time="2025-02-13T19:29:06.253329637Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" returns successfully" 
Feb 13 19:29:06.253982 kubelet[2633]: I0213 19:29:06.253941 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1" Feb 13 19:29:06.254410 kubelet[2633]: E0213 19:29:06.254388 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:06.254465 containerd[1509]: time="2025-02-13T19:29:06.254446106Z" level=info msg="StopPodSandbox for \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\"" Feb 13 19:29:06.254790 containerd[1509]: time="2025-02-13T19:29:06.254637516Z" level=info msg="Ensure that sandbox 90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1 in task-service has been cleanup successfully" Feb 13 19:29:06.254790 containerd[1509]: time="2025-02-13T19:29:06.254727135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:5,}" Feb 13 19:29:06.254969 containerd[1509]: time="2025-02-13T19:29:06.254865826Z" level=info msg="TearDown network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\" successfully" Feb 13 19:29:06.254969 containerd[1509]: time="2025-02-13T19:29:06.254883339Z" level=info msg="StopPodSandbox for \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\" returns successfully" Feb 13 19:29:06.255362 containerd[1509]: time="2025-02-13T19:29:06.255329568Z" level=info msg="StopPodSandbox for \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\"" Feb 13 19:29:06.255628 containerd[1509]: time="2025-02-13T19:29:06.255436339Z" level=info msg="TearDown network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" successfully" Feb 13 19:29:06.255628 containerd[1509]: time="2025-02-13T19:29:06.255453241Z" level=info msg="StopPodSandbox for \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" returns successfully" Feb 13 19:29:06.256072 containerd[1509]: time="2025-02-13T19:29:06.256037028Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\"" Feb 13 19:29:06.256719 containerd[1509]: time="2025-02-13T19:29:06.256142337Z" level=info msg="TearDown network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" successfully" Feb 13 19:29:06.256719 containerd[1509]: time="2025-02-13T19:29:06.256156764Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" returns successfully" Feb 13 19:29:06.256719 containerd[1509]: time="2025-02-13T19:29:06.256449214Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" Feb 13 19:29:06.256719 containerd[1509]: time="2025-02-13T19:29:06.256537309Z" level=info msg="TearDown network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" successfully" Feb 13 19:29:06.256719 containerd[1509]: time="2025-02-13T19:29:06.256550835Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" returns successfully" Feb 13 19:29:06.257209 containerd[1509]: time="2025-02-13T19:29:06.257184847Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:29:06.257290 containerd[1509]: 
time="2025-02-13T19:29:06.257271581Z" level=info msg="TearDown network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" successfully" Feb 13 19:29:06.257332 containerd[1509]: time="2025-02-13T19:29:06.257288002Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" returns successfully" Feb 13 19:29:06.257362 kubelet[2633]: I0213 19:29:06.257338 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764" Feb 13 19:29:06.257830 kubelet[2633]: E0213 19:29:06.257443 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:06.257890 containerd[1509]: time="2025-02-13T19:29:06.257838637Z" level=info msg="StopPodSandbox for \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\"" Feb 13 19:29:06.258091 containerd[1509]: time="2025-02-13T19:29:06.258037361Z" level=info msg="Ensure that sandbox 41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764 in task-service has been cleanup successfully" Feb 13 19:29:06.258286 containerd[1509]: time="2025-02-13T19:29:06.258161735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:5,}" Feb 13 19:29:06.258383 containerd[1509]: time="2025-02-13T19:29:06.258238379Z" level=info msg="TearDown network for sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\" successfully" Feb 13 19:29:06.258383 containerd[1509]: time="2025-02-13T19:29:06.258380867Z" level=info msg="StopPodSandbox for \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\" returns successfully" Feb 13 19:29:06.258922 containerd[1509]: time="2025-02-13T19:29:06.258892519Z" level=info msg="StopPodSandbox for \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\"" Feb 13 19:29:06.259243 containerd[1509]: time="2025-02-13T19:29:06.259154081Z" level=info msg="TearDown network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" successfully" Feb 13 19:29:06.259243 containerd[1509]: time="2025-02-13T19:29:06.259169389Z" level=info msg="StopPodSandbox for \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" returns successfully" Feb 13 19:29:06.259578 containerd[1509]: time="2025-02-13T19:29:06.259542873Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\"" Feb 13 19:29:06.259732 containerd[1509]: time="2025-02-13T19:29:06.259648761Z" level=info msg="TearDown network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" successfully" Feb 13 19:29:06.259732 containerd[1509]: time="2025-02-13T19:29:06.259669841Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" returns successfully" Feb 13 19:29:06.260594 containerd[1509]: time="2025-02-13T19:29:06.260570536Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" Feb 13 19:29:06.260947 containerd[1509]: time="2025-02-13T19:29:06.260919893Z" level=info msg="TearDown network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" successfully" Feb 13 19:29:06.260947 containerd[1509]: time="2025-02-13T19:29:06.260936804Z" 
level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" returns successfully" Feb 13 19:29:06.261659 kubelet[2633]: E0213 19:29:06.261634 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:06.262516 containerd[1509]: time="2025-02-13T19:29:06.262469296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:4,}" Feb 13 19:29:06.266369 kubelet[2633]: I0213 19:29:06.266179 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2" Feb 13 19:29:06.268072 containerd[1509]: time="2025-02-13T19:29:06.268036557Z" level=info msg="StopPodSandbox for \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\"" Feb 13 19:29:06.268338 containerd[1509]: time="2025-02-13T19:29:06.268260568Z" level=info msg="Ensure that sandbox 4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2 in task-service has been cleanup successfully" Feb 13 19:29:06.268561 containerd[1509]: time="2025-02-13T19:29:06.268476935Z" level=info msg="TearDown network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\" successfully" Feb 13 19:29:06.268561 containerd[1509]: time="2025-02-13T19:29:06.268531427Z" level=info msg="StopPodSandbox for \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\" returns successfully" Feb 13 19:29:06.269262 containerd[1509]: time="2025-02-13T19:29:06.268949464Z" level=info msg="StopPodSandbox for \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\"" Feb 13 19:29:06.269262 containerd[1509]: time="2025-02-13T19:29:06.269198542Z" level=info msg="TearDown network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" successfully" Feb 13 19:29:06.269262 containerd[1509]: time="2025-02-13T19:29:06.269210625Z" level=info msg="StopPodSandbox for \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" returns successfully" Feb 13 19:29:06.269865 containerd[1509]: time="2025-02-13T19:29:06.269609275Z" level=info msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\"" Feb 13 19:29:06.269865 containerd[1509]: time="2025-02-13T19:29:06.269682552Z" level=info msg="TearDown network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" successfully" Feb 13 19:29:06.269865 containerd[1509]: time="2025-02-13T19:29:06.269691108Z" level=info msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" returns successfully" Feb 13 19:29:06.270373 containerd[1509]: time="2025-02-13T19:29:06.270323608Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" Feb 13 19:29:06.270437 containerd[1509]: time="2025-02-13T19:29:06.270424858Z" level=info msg="TearDown network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" successfully" Feb 13 19:29:06.270467 containerd[1509]: time="2025-02-13T19:29:06.270435819Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" returns successfully" Feb 13 19:29:06.271091 containerd[1509]: time="2025-02-13T19:29:06.271070543Z" level=info msg="StopPodSandbox for 
\"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:29:06.271347 containerd[1509]: time="2025-02-13T19:29:06.271302409Z" level=info msg="TearDown network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" successfully" Feb 13 19:29:06.271403 kubelet[2633]: I0213 19:29:06.271339 2633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a" Feb 13 19:29:06.271687 containerd[1509]: time="2025-02-13T19:29:06.271554303Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" returns successfully" Feb 13 19:29:06.271926 containerd[1509]: time="2025-02-13T19:29:06.271889984Z" level=info msg="StopPodSandbox for \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\"" Feb 13 19:29:06.272232 containerd[1509]: time="2025-02-13T19:29:06.272143421Z" level=info msg="Ensure that sandbox 2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a in task-service has been cleanup successfully" Feb 13 19:29:06.272446 containerd[1509]: time="2025-02-13T19:29:06.272404702Z" level=info msg="TearDown network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\" successfully" Feb 13 19:29:06.272446 containerd[1509]: time="2025-02-13T19:29:06.272421694Z" level=info msg="StopPodSandbox for \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\" returns successfully" Feb 13 19:29:06.273125 containerd[1509]: time="2025-02-13T19:29:06.273000082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:29:06.273125 containerd[1509]: time="2025-02-13T19:29:06.273082137Z" level=info msg="StopPodSandbox for \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\"" Feb 13 19:29:06.273551 containerd[1509]: time="2025-02-13T19:29:06.273184228Z" level=info msg="TearDown network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" successfully" Feb 13 19:29:06.273551 containerd[1509]: time="2025-02-13T19:29:06.273196772Z" level=info msg="StopPodSandbox for \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" returns successfully" Feb 13 19:29:06.274701 containerd[1509]: time="2025-02-13T19:29:06.274638824Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\"" Feb 13 19:29:06.275109 containerd[1509]: time="2025-02-13T19:29:06.275074143Z" level=info msg="TearDown network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" successfully" Feb 13 19:29:06.275267 containerd[1509]: time="2025-02-13T19:29:06.275194820Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" returns successfully" Feb 13 19:29:06.275692 containerd[1509]: time="2025-02-13T19:29:06.275667629Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" Feb 13 19:29:06.275837 containerd[1509]: time="2025-02-13T19:29:06.275793435Z" level=info msg="TearDown network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" successfully" Feb 13 19:29:06.275837 containerd[1509]: time="2025-02-13T19:29:06.275811790Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" returns 
successfully" Feb 13 19:29:06.276142 containerd[1509]: time="2025-02-13T19:29:06.276114139Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:29:06.276212 containerd[1509]: time="2025-02-13T19:29:06.276202585Z" level=info msg="TearDown network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" successfully" Feb 13 19:29:06.276236 containerd[1509]: time="2025-02-13T19:29:06.276216091Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" returns successfully" Feb 13 19:29:06.277964 containerd[1509]: time="2025-02-13T19:29:06.277909756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:5,}" Feb 13 19:29:06.279283 kubelet[2633]: I0213 19:29:06.279219 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m8v5l" podStartSLOduration=1.751123829 podStartE2EDuration="18.279202848s" podCreationTimestamp="2025-02-13 19:28:48 +0000 UTC" firstStartedPulling="2025-02-13 19:28:49.115736895 +0000 UTC m=+15.402756828" lastFinishedPulling="2025-02-13 19:29:05.643815924 +0000 UTC m=+31.930835847" observedRunningTime="2025-02-13 19:29:06.277350434 +0000 UTC m=+32.564370367" watchObservedRunningTime="2025-02-13 19:29:06.279202848 +0000 UTC m=+32.566222781" Feb 13 19:29:06.556908 systemd-networkd[1442]: cali77830174835: Link UP Feb 13 19:29:06.557272 systemd-networkd[1442]: cali77830174835: Gained carrier Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.329 [INFO][4545] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.360 [INFO][4545] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0 calico-apiserver-6c445f8fb- calico-apiserver 57e05ce1-cab8-4450-a70c-4775184ae13e 695 0 2025-02-13 19:28:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c445f8fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c445f8fb-sdsg2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77830174835 [] []}} ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.360 [INFO][4545] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.506 [INFO][4647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" HandleID="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Workload="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" Feb 13 19:29:06.569253 containerd[1509]: 
2025-02-13 19:29:06.519 [INFO][4647] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" HandleID="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Workload="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c445f8fb-sdsg2", "timestamp":"2025-02-13 19:29:06.506483286 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4647] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.521 [INFO][4647] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.525 [INFO][4647] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.530 [INFO][4647] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.533 [INFO][4647] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.535 [INFO][4647] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.535 [INFO][4647] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.536 [INFO][4647] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.540 [INFO][4647] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.546 [INFO][4647] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.546 [INFO][4647] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" host="localhost" Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.546 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
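The IPAM request above is easier to read pulled out of the log: the CNI plugin asks for one IPv4 address under a handle named after the pod sandbox, tags it with namespace/node/pod attributes, and Calico satisfies it from the node-affine block 192.168.88.128/26 under the host-wide IPAM lock. The sketch below is a minimal, self-contained Go mirror of a subset of those logged fields; the real AutoAssignArgs type lives in Calico's libcalico-go and is not imported here, so the struct and its field set are illustrative only.

// Minimal, self-contained mirror of the AutoAssignArgs fields the CNI IPAM
// plugin logs above. The real definition lives in Calico's libcalico-go;
// the fields here are copied from the log entry, not imported from that library.
package main

import (
	"fmt"
	"net"
)

// autoAssignArgs mirrors the logged ipam.AutoAssignArgs fields (illustrative only).
type autoAssignArgs struct {
	Num4, Num6  int               // how many IPv4/IPv6 addresses to assign
	HandleID    *string           // handle used to release the IP later ("k8s-pod-network.<sandboxID>")
	Attrs       map[string]string // namespace/node/pod attributes recorded with the allocation
	Hostname    string            // node that requests (and keeps affinity to) the block
	IPv4Pools   []net.IPNet       // empty: any enabled pool may be used
	IntendedUse string
}

func main() {
	handle := "k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d"
	args := autoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "localhost",
			"pod":       "calico-apiserver-6c445f8fb-sdsg2",
		},
		Hostname:    "localhost",
		IntendedUse: "Workload",
	}
	// The plugin then takes the host-wide IPAM lock, confirms the node's
	// affinity to block 192.168.88.128/26, and claims 192.168.88.129/26
	// for this handle before releasing the lock (see the log above).
	fmt.Printf("requesting %d IPv4 address(es) for %s\n", args.Num4, *args.HandleID)
}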
Feb 13 19:29:06.569253 containerd[1509]: 2025-02-13 19:29:06.546 [INFO][4647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" HandleID="k8s-pod-network.ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Workload="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" Feb 13 19:29:06.570251 containerd[1509]: 2025-02-13 19:29:06.549 [INFO][4545] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0", GenerateName:"calico-apiserver-6c445f8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"57e05ce1-cab8-4450-a70c-4775184ae13e", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c445f8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c445f8fb-sdsg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77830174835", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.570251 containerd[1509]: 2025-02-13 19:29:06.549 [INFO][4545] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" Feb 13 19:29:06.570251 containerd[1509]: 2025-02-13 19:29:06.549 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77830174835 ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" Feb 13 19:29:06.570251 containerd[1509]: 2025-02-13 19:29:06.557 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" Feb 13 19:29:06.570251 containerd[1509]: 2025-02-13 19:29:06.558 [INFO][4545] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" 
Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0", GenerateName:"calico-apiserver-6c445f8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"57e05ce1-cab8-4450-a70c-4775184ae13e", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c445f8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d", Pod:"calico-apiserver-6c445f8fb-sdsg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77830174835", MAC:"56:78:fc:4a:eb:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.570251 containerd[1509]: 2025-02-13 19:29:06.566 [INFO][4545] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-sdsg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--sdsg2-eth0" Feb 13 19:29:06.583416 systemd[1]: run-netns-cni\x2de0c4b507\x2dd4b3\x2de10f\x2d667e\x2daee200dc15eb.mount: Deactivated successfully. Feb 13 19:29:06.583898 systemd[1]: run-netns-cni\x2dcd48fbdb\x2d7313\x2db659\x2dfe8e\x2d26de554c58f7.mount: Deactivated successfully. Feb 13 19:29:06.583985 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764-shm.mount: Deactivated successfully. Feb 13 19:29:06.584062 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1-shm.mount: Deactivated successfully. Feb 13 19:29:06.584144 systemd[1]: run-netns-cni\x2d9b6901a7\x2d9544\x2d84d0\x2d37f2\x2d2112e0673705.mount: Deactivated successfully. Feb 13 19:29:06.584221 systemd[1]: run-netns-cni\x2dfc38eea9\x2d3aae\x2deec6\x2dd3cc\x2dd39d3cb3c778.mount: Deactivated successfully. Feb 13 19:29:06.584291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7-shm.mount: Deactivated successfully. Feb 13 19:29:06.584369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b-shm.mount: Deactivated successfully. Feb 13 19:29:06.584447 systemd[1]: run-netns-cni\x2dd128c88b\x2d5ddd\x2d5c18\x2dba0c\x2d1bde1078fdf8.mount: Deactivated successfully. 
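The run-netns and sandbox shm mount units deactivated above carry systemd-escaped names: in a unit name "/" becomes "-" and other special bytes (here the literal dashes inside the netns IDs) become \xNN, which is why they read run-netns-cni\x2d.... A small decoder, written here as an assumed illustration rather than systemd's own code, recovers the underlying paths:

// Decode a systemd .mount unit name like the ones deactivated above back into
// the filesystem path it guards. Illustrative only; real escaping is done by
// systemd (compare systemd-escape).
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

var hexEscape = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

// unitToPath reverses systemd's path escaping for a .mount unit name.
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	// Un-escaped dashes are directory separators in the original path.
	name = strings.ReplaceAll(name, "-", "/")
	// \xNN sequences carry the bytes that were escaped (\x2d is "-").
	name = hexEscape.ReplaceAllStringFunc(name, func(m string) string {
		b, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(b))
	})
	return "/" + name
}

func main() {
	unit := `run-netns-cni\x2de0c4b507\x2dd4b3\x2de10f\x2d667e\x2daee200dc15eb.mount`
	fmt.Println(unitToPath(unit))
	// Prints: /run/netns/cni-e0c4b507-d4b3-e10f-667e-aee200dc15eb
}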
Feb 13 19:29:06.584524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a-shm.mount: Deactivated successfully. Feb 13 19:29:06.584603 systemd[1]: run-netns-cni\x2dcebf6eeb\x2dafd4\x2d40c9\x2dedb4\x2df9c4bcce0bf1.mount: Deactivated successfully. Feb 13 19:29:06.584687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2-shm.mount: Deactivated successfully. Feb 13 19:29:06.614517 containerd[1509]: time="2025-02-13T19:29:06.614433264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:06.614654 containerd[1509]: time="2025-02-13T19:29:06.614500590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:06.614654 containerd[1509]: time="2025-02-13T19:29:06.614521760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.614654 containerd[1509]: time="2025-02-13T19:29:06.614635865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.646951 systemd[1]: Started cri-containerd-ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d.scope - libcontainer container ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d. Feb 13 19:29:06.648554 systemd-networkd[1442]: cali936205a38c6: Link UP Feb 13 19:29:06.649548 systemd-networkd[1442]: cali936205a38c6: Gained carrier Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.387 [INFO][4573] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.399 [INFO][4573] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0 coredns-668d6bf9bc- kube-system 95422681-2fb6-4df9-b5da-8fadfe907d26 692 0 2025-02-13 19:28:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-t8v5n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali936205a38c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.400 [INFO][4573] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.507 [INFO][4655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" HandleID="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Workload="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4655] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" HandleID="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Workload="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044a880), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-t8v5n", "timestamp":"2025-02-13 19:29:06.50702232 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.546 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.546 [INFO][4655] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.623 [INFO][4655] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.626 [INFO][4655] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.630 [INFO][4655] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.633 [INFO][4655] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.635 [INFO][4655] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.635 [INFO][4655] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.636 [INFO][4655] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189 Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.640 [INFO][4655] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.644 [INFO][4655] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.644 [INFO][4655] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" host="localhost" Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.644 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
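The same allocation path repeats for coredns-668d6bf9bc-t8v5n and hands out the next address, 192.168.88.130, from the block the node already holds; the remaining endpoints later in this log (.131 through .133) follow the same pattern. A stdlib-only check that the assigned addresses really sit inside 192.168.88.128/26, using only values copied from the log:

// Quick check that the per-pod addresses assigned above fall inside the /26
// block this node holds an affinity for. Pure stdlib; nothing Calico-specific.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // node-affine IPAM block

	assigned := map[string]string{
		"calico-apiserver-6c445f8fb-sdsg2": "192.168.88.129",
		"coredns-668d6bf9bc-t8v5n":         "192.168.88.130",
	}
	for pod, ip := range assigned {
		addr := netip.MustParseAddr(ip)
		fmt.Printf("%-35s %s in %s: %v\n", pod, addr, block, block.Contains(addr))
	}
}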
Feb 13 19:29:06.662143 containerd[1509]: 2025-02-13 19:29:06.644 [INFO][4655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" HandleID="k8s-pod-network.83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Workload="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" Feb 13 19:29:06.661901 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:29:06.662975 containerd[1509]: 2025-02-13 19:29:06.646 [INFO][4573] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"95422681-2fb6-4df9-b5da-8fadfe907d26", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-t8v5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali936205a38c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.662975 containerd[1509]: 2025-02-13 19:29:06.647 [INFO][4573] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" Feb 13 19:29:06.662975 containerd[1509]: 2025-02-13 19:29:06.647 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali936205a38c6 ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" Feb 13 19:29:06.662975 containerd[1509]: 2025-02-13 19:29:06.649 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" Feb 13 19:29:06.662975 containerd[1509]: 2025-02-13 19:29:06.649 [INFO][4573] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"95422681-2fb6-4df9-b5da-8fadfe907d26", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189", Pod:"coredns-668d6bf9bc-t8v5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali936205a38c6", MAC:"06:c9:1c:d6:3e:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.662975 containerd[1509]: 2025-02-13 19:29:06.659 [INFO][4573] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189" Namespace="kube-system" Pod="coredns-668d6bf9bc-t8v5n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t8v5n-eth0" Feb 13 19:29:06.684683 containerd[1509]: time="2025-02-13T19:29:06.684597545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:06.684898 containerd[1509]: time="2025-02-13T19:29:06.684666055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:06.684898 containerd[1509]: time="2025-02-13T19:29:06.684680662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.684898 containerd[1509]: time="2025-02-13T19:29:06.684816869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.694315 containerd[1509]: time="2025-02-13T19:29:06.694274075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-sdsg2,Uid:57e05ce1-cab8-4450-a70c-4775184ae13e,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d\"" Feb 13 19:29:06.696120 containerd[1509]: time="2025-02-13T19:29:06.696088297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:29:06.714948 systemd[1]: Started cri-containerd-83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189.scope - libcontainer container 83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189. Feb 13 19:29:06.728358 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:29:06.751131 systemd-networkd[1442]: cali0c61b4834ed: Link UP Feb 13 19:29:06.751818 systemd-networkd[1442]: cali0c61b4834ed: Gained carrier Feb 13 19:29:06.757839 containerd[1509]: time="2025-02-13T19:29:06.757806804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t8v5n,Uid:95422681-2fb6-4df9-b5da-8fadfe907d26,Namespace:kube-system,Attempt:5,} returns sandbox id \"83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189\"" Feb 13 19:29:06.758615 kubelet[2633]: E0213 19:29:06.758586 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:06.761187 containerd[1509]: time="2025-02-13T19:29:06.761155463Z" level=info msg="CreateContainer within sandbox \"83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.335 [INFO][4556] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.358 [INFO][4556] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0 coredns-668d6bf9bc- kube-system bdf24e1d-a6d5-42ea-b368-ab29d0b4f983 697 0 2025-02-13 19:28:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lc7rd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0c61b4834ed [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.358 [INFO][4556] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.506 [INFO][4644] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" 
HandleID="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Workload="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4644] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" HandleID="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Workload="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lc7rd", "timestamp":"2025-02-13 19:29:06.506794983 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.644 [INFO][4644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.644 [INFO][4644] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.723 [INFO][4644] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.727 [INFO][4644] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.732 [INFO][4644] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.733 [INFO][4644] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.735 [INFO][4644] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.736 [INFO][4644] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.737 [INFO][4644] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033 Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.740 [INFO][4644] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.744 [INFO][4644] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" host="localhost" Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.744 [INFO][4644] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" host="localhost" Feb 13 
19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.744 [INFO][4644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:29:06.765891 containerd[1509]: 2025-02-13 19:29:06.744 [INFO][4644] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" HandleID="k8s-pod-network.6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Workload="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" Feb 13 19:29:06.766434 containerd[1509]: 2025-02-13 19:29:06.748 [INFO][4556] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bdf24e1d-a6d5-42ea-b368-ab29d0b4f983", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lc7rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c61b4834ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.766434 containerd[1509]: 2025-02-13 19:29:06.748 [INFO][4556] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" Feb 13 19:29:06.766434 containerd[1509]: 2025-02-13 19:29:06.748 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c61b4834ed ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" Feb 13 19:29:06.766434 containerd[1509]: 2025-02-13 19:29:06.752 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" Feb 13 19:29:06.766434 containerd[1509]: 2025-02-13 19:29:06.752 [INFO][4556] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bdf24e1d-a6d5-42ea-b368-ab29d0b4f983", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033", Pod:"coredns-668d6bf9bc-lc7rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c61b4834ed", MAC:"36:cc:b8:56:6c:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.766434 containerd[1509]: 2025-02-13 19:29:06.763 [INFO][4556] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033" Namespace="kube-system" Pod="coredns-668d6bf9bc-lc7rd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lc7rd-eth0" Feb 13 19:29:06.790109 containerd[1509]: time="2025-02-13T19:29:06.789970667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:06.790301 containerd[1509]: time="2025-02-13T19:29:06.790078148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:06.790301 containerd[1509]: time="2025-02-13T19:29:06.790113816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.791016 containerd[1509]: time="2025-02-13T19:29:06.790955759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.797096 containerd[1509]: time="2025-02-13T19:29:06.797058668Z" level=info msg="CreateContainer within sandbox \"83c4947532ef0434ffb71294caecd78be807b4676b6ccc0e42a8869cc69c2189\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a1321de06a540484ab235da78857c39c8738d8d8cb687876c9ebe5e816fa10d\"" Feb 13 19:29:06.797622 containerd[1509]: time="2025-02-13T19:29:06.797580439Z" level=info msg="StartContainer for \"3a1321de06a540484ab235da78857c39c8738d8d8cb687876c9ebe5e816fa10d\"" Feb 13 19:29:06.807906 systemd[1]: Started cri-containerd-6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033.scope - libcontainer container 6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033. Feb 13 19:29:06.821270 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:29:06.833986 systemd[1]: Started cri-containerd-3a1321de06a540484ab235da78857c39c8738d8d8cb687876c9ebe5e816fa10d.scope - libcontainer container 3a1321de06a540484ab235da78857c39c8738d8d8cb687876c9ebe5e816fa10d. Feb 13 19:29:06.853671 containerd[1509]: time="2025-02-13T19:29:06.853590711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lc7rd,Uid:bdf24e1d-a6d5-42ea-b368-ab29d0b4f983,Namespace:kube-system,Attempt:5,} returns sandbox id \"6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033\"" Feb 13 19:29:06.854527 kubelet[2633]: E0213 19:29:06.854482 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:06.859384 containerd[1509]: time="2025-02-13T19:29:06.859348620Z" level=info msg="CreateContainer within sandbox \"6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:29:06.862945 systemd-networkd[1442]: cali1b68bfca9f3: Link UP Feb 13 19:29:06.863156 systemd-networkd[1442]: cali1b68bfca9f3: Gained carrier Feb 13 19:29:06.876199 containerd[1509]: time="2025-02-13T19:29:06.876092380Z" level=info msg="StartContainer for \"3a1321de06a540484ab235da78857c39c8738d8d8cb687876c9ebe5e816fa10d\" returns successfully" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.388 [INFO][4597] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.408 [INFO][4597] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0 calico-apiserver-6c445f8fb- calico-apiserver c94995b2-7cbe-4295-8d76-ae0d1a49f166 696 0 2025-02-13 19:28:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c445f8fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c445f8fb-l74z9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b68bfca9f3 [] []}} ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.408 [INFO][4597] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.508 [INFO][4659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" HandleID="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Workload="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" HandleID="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Workload="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000482200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c445f8fb-l74z9", "timestamp":"2025-02-13 19:29:06.508843095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.520 [INFO][4659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.744 [INFO][4659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.744 [INFO][4659] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.824 [INFO][4659] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.830 [INFO][4659] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.833 [INFO][4659] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.835 [INFO][4659] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.837 [INFO][4659] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.837 [INFO][4659] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.839 [INFO][4659] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.843 [INFO][4659] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" host="localhost" Feb 
13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.849 [INFO][4659] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.849 [INFO][4659] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" host="localhost" Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.850 [INFO][4659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:29:06.876695 containerd[1509]: 2025-02-13 19:29:06.850 [INFO][4659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" HandleID="k8s-pod-network.87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Workload="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" Feb 13 19:29:06.877332 containerd[1509]: 2025-02-13 19:29:06.858 [INFO][4597] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0", GenerateName:"calico-apiserver-6c445f8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c94995b2-7cbe-4295-8d76-ae0d1a49f166", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c445f8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c445f8fb-l74z9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b68bfca9f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.877332 containerd[1509]: 2025-02-13 19:29:06.858 [INFO][4597] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" Feb 13 19:29:06.877332 containerd[1509]: 2025-02-13 19:29:06.858 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b68bfca9f3 ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" Feb 13 19:29:06.877332 containerd[1509]: 2025-02-13 19:29:06.863 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" Feb 13 19:29:06.877332 containerd[1509]: 2025-02-13 19:29:06.864 [INFO][4597] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0", GenerateName:"calico-apiserver-6c445f8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c94995b2-7cbe-4295-8d76-ae0d1a49f166", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c445f8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec", Pod:"calico-apiserver-6c445f8fb-l74z9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b68bfca9f3", MAC:"46:da:e2:ac:e3:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:06.877332 containerd[1509]: 2025-02-13 19:29:06.872 [INFO][4597] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec" Namespace="calico-apiserver" Pod="calico-apiserver-6c445f8fb-l74z9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c445f8fb--l74z9-eth0" Feb 13 19:29:06.884940 containerd[1509]: time="2025-02-13T19:29:06.884898883Z" level=info msg="CreateContainer within sandbox \"6710b0bb8170d046c9cc8e07d2225f9dfed41cfb21973f93b129744481e49033\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9a9484dbf5a547d205c97255e10d515eee914003c1ac3ab18136ddc26b43c24\"" Feb 13 19:29:06.885407 containerd[1509]: time="2025-02-13T19:29:06.885380329Z" level=info msg="StartContainer for \"d9a9484dbf5a547d205c97255e10d515eee914003c1ac3ab18136ddc26b43c24\"" Feb 13 19:29:06.902507 containerd[1509]: time="2025-02-13T19:29:06.901824527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:06.902507 containerd[1509]: time="2025-02-13T19:29:06.901888857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:06.902507 containerd[1509]: time="2025-02-13T19:29:06.901902282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.902507 containerd[1509]: time="2025-02-13T19:29:06.901988905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:06.915231 systemd[1]: Started cri-containerd-d9a9484dbf5a547d205c97255e10d515eee914003c1ac3ab18136ddc26b43c24.scope - libcontainer container d9a9484dbf5a547d205c97255e10d515eee914003c1ac3ab18136ddc26b43c24. Feb 13 19:29:06.919097 systemd[1]: Started cri-containerd-87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec.scope - libcontainer container 87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec. Feb 13 19:29:06.940055 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:29:06.983785 systemd-networkd[1442]: calib48e866b9ea: Link UP Feb 13 19:29:06.984713 systemd-networkd[1442]: calib48e866b9ea: Gained carrier Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.415 [INFO][4591] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.492 [INFO][4591] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k6nvb-eth0 csi-node-driver- calico-system 5eaecbe6-c19b-4299-995d-b27991011c1a 601 0 2025-02-13 19:28:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k6nvb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib48e866b9ea [] []}} ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.493 [INFO][4591] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-eth0" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.543 [INFO][4690] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" HandleID="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Workload="localhost-k8s-csi--node--driver--k6nvb-eth0" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.621 [INFO][4690] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" HandleID="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" 
Workload="localhost-k8s-csi--node--driver--k6nvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002946b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k6nvb", "timestamp":"2025-02-13 19:29:06.543228024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.621 [INFO][4690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.850 [INFO][4690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.850 [INFO][4690] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.926 [INFO][4690] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.934 [INFO][4690] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.945 [INFO][4690] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.949 [INFO][4690] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.954 [INFO][4690] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.954 [INFO][4690] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.957 [INFO][4690] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5 Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.963 [INFO][4690] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.972 [INFO][4690] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.975 [INFO][4690] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" host="localhost" Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.975 [INFO][4690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
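One detail worth decoding from the WorkloadEndpoint dumps for the two coredns pods earlier in this log: the container ports are printed in hex (Port:0x35, Port:0x23c1), which are simply 53 and 9153. The struct below is a local stand-in for the logged v3.WorkloadEndpointPort entries (the real type is Calico's), used only to print the decoded values:

// Decode the hex port values shown in the coredns WorkloadEndpoint dumps.
// endpointPort is a local stand-in for the logged v3.WorkloadEndpointPort.
package main

import "fmt"

type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	ports := []endpointPort{
		{Name: "dns", Protocol: "UDP", Port: 0x35},       // 53/UDP
		{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53/TCP
		{Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153/TCP
	}
	for _, p := range ports {
		fmt.Printf("%-8s %s/%d\n", p.Name, p.Protocol, p.Port)
	}
}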
Feb 13 19:29:07.163314 containerd[1509]: 2025-02-13 19:29:06.975 [INFO][4690] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" HandleID="k8s-pod-network.34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Workload="localhost-k8s-csi--node--driver--k6nvb-eth0" Feb 13 19:29:07.203838 containerd[1509]: 2025-02-13 19:29:06.980 [INFO][4591] cni-plugin/k8s.go 386: Populated endpoint ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k6nvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5eaecbe6-c19b-4299-995d-b27991011c1a", ResourceVersion:"601", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k6nvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib48e866b9ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:07.203838 containerd[1509]: 2025-02-13 19:29:06.980 [INFO][4591] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-eth0" Feb 13 19:29:07.203838 containerd[1509]: 2025-02-13 19:29:06.980 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib48e866b9ea ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-eth0" Feb 13 19:29:07.203838 containerd[1509]: 2025-02-13 19:29:06.985 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-eth0" Feb 13 19:29:07.203838 containerd[1509]: 2025-02-13 19:29:06.985 [INFO][4591] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k6nvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5eaecbe6-c19b-4299-995d-b27991011c1a", ResourceVersion:"601", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5", Pod:"csi-node-driver-k6nvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib48e866b9ea", MAC:"b2:2e:35:a0:93:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:07.203838 containerd[1509]: 2025-02-13 19:29:07.156 [INFO][4591] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5" Namespace="calico-system" Pod="csi-node-driver-k6nvb" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6nvb-eth0" Feb 13 19:29:07.208235 containerd[1509]: time="2025-02-13T19:29:07.207717582Z" level=info msg="StartContainer for \"d9a9484dbf5a547d205c97255e10d515eee914003c1ac3ab18136ddc26b43c24\" returns successfully" Feb 13 19:29:07.208235 containerd[1509]: time="2025-02-13T19:29:07.207847537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c445f8fb-l74z9,Uid:c94995b2-7cbe-4295-8d76-ae0d1a49f166,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec\"" Feb 13 19:29:07.220500 systemd-networkd[1442]: cali45c64fb9de7: Link UP Feb 13 19:29:07.222011 systemd-networkd[1442]: cali45c64fb9de7: Gained carrier Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.413 [INFO][4621] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.486 [INFO][4621] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0 calico-kube-controllers-5b67b658d9- calico-system 5a1448e5-fc1e-42d8-9fd7-25807931cfd4 698 0 2025-02-13 19:28:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b67b658d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b67b658d9-8gkf8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45c64fb9de7 [] []}} 
ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.486 [INFO][4621] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.535 [INFO][4686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" HandleID="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Workload="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.620 [INFO][4686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" HandleID="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Workload="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c6150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b67b658d9-8gkf8", "timestamp":"2025-02-13 19:29:06.535393299 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.621 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.975 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:06.976 [INFO][4686] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.154 [INFO][4686] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.168 [INFO][4686] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.176 [INFO][4686] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.178 [INFO][4686] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.181 [INFO][4686] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.181 [INFO][4686] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.182 [INFO][4686] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4 Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.197 [INFO][4686] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.205 [INFO][4686] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.205 [INFO][4686] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" host="localhost" Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.205 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:29:07.233699 containerd[1509]: 2025-02-13 19:29:07.205 [INFO][4686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" HandleID="k8s-pod-network.906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Workload="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" Feb 13 19:29:07.234243 containerd[1509]: 2025-02-13 19:29:07.216 [INFO][4621] cni-plugin/k8s.go 386: Populated endpoint ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0", GenerateName:"calico-kube-controllers-5b67b658d9-", Namespace:"calico-system", SelfLink:"", UID:"5a1448e5-fc1e-42d8-9fd7-25807931cfd4", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b67b658d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b67b658d9-8gkf8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45c64fb9de7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:07.234243 containerd[1509]: 2025-02-13 19:29:07.216 [INFO][4621] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" Feb 13 19:29:07.234243 containerd[1509]: 2025-02-13 19:29:07.216 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45c64fb9de7 ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" Feb 13 19:29:07.234243 containerd[1509]: 2025-02-13 19:29:07.218 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" Feb 13 19:29:07.234243 containerd[1509]: 2025-02-13 19:29:07.218 [INFO][4621] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0", GenerateName:"calico-kube-controllers-5b67b658d9-", Namespace:"calico-system", SelfLink:"", UID:"5a1448e5-fc1e-42d8-9fd7-25807931cfd4", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b67b658d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4", Pod:"calico-kube-controllers-5b67b658d9-8gkf8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45c64fb9de7", MAC:"2a:c3:20:26:c9:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:29:07.234243 containerd[1509]: 2025-02-13 19:29:07.226 [INFO][4621] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4" Namespace="calico-system" Pod="calico-kube-controllers-5b67b658d9-8gkf8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b67b658d9--8gkf8-eth0" Feb 13 19:29:07.289792 kubelet[2633]: E0213 19:29:07.289750 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:07.299841 kubelet[2633]: E0213 19:29:07.299810 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:07.304944 kubelet[2633]: E0213 19:29:07.301708 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:07.315664 kubelet[2633]: I0213 19:29:07.311837 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lc7rd" podStartSLOduration=27.311820377 podStartE2EDuration="27.311820377s" podCreationTimestamp="2025-02-13 19:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:29:07.306628343 +0000 UTC m=+33.593648276" watchObservedRunningTime="2025-02-13 19:29:07.311820377 +0000 UTC m=+33.598840310" Feb 13 19:29:07.335026 kubelet[2633]: I0213 19:29:07.334948 2633 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t8v5n" podStartSLOduration=27.334931034 podStartE2EDuration="27.334931034s" podCreationTimestamp="2025-02-13 19:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:29:07.334496185 +0000 UTC m=+33.621516119" watchObservedRunningTime="2025-02-13 19:29:07.334931034 +0000 UTC m=+33.621950957" Feb 13 19:29:07.341527 containerd[1509]: time="2025-02-13T19:29:07.341033919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:07.341527 containerd[1509]: time="2025-02-13T19:29:07.341093391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:07.341527 containerd[1509]: time="2025-02-13T19:29:07.341107518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:07.341527 containerd[1509]: time="2025-02-13T19:29:07.341189612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:07.360033 containerd[1509]: time="2025-02-13T19:29:07.359923660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:29:07.362125 containerd[1509]: time="2025-02-13T19:29:07.362084362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:29:07.362125 containerd[1509]: time="2025-02-13T19:29:07.362105732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:07.362216 containerd[1509]: time="2025-02-13T19:29:07.362197966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:29:07.375968 systemd[1]: Started cri-containerd-34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5.scope - libcontainer container 34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5. Feb 13 19:29:07.393119 systemd[1]: Started cri-containerd-906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4.scope - libcontainer container 906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4. 
Feb 13 19:29:07.413349 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:29:07.432151 containerd[1509]: time="2025-02-13T19:29:07.432106171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6nvb,Uid:5eaecbe6-c19b-4299-995d-b27991011c1a,Namespace:calico-system,Attempt:4,} returns sandbox id \"34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5\"" Feb 13 19:29:07.436321 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:29:07.466105 containerd[1509]: time="2025-02-13T19:29:07.466063263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b67b658d9-8gkf8,Uid:5a1448e5-fc1e-42d8-9fd7-25807931cfd4,Namespace:calico-system,Attempt:5,} returns sandbox id \"906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4\"" Feb 13 19:29:07.897935 systemd-networkd[1442]: cali936205a38c6: Gained IPv6LL Feb 13 19:29:08.026944 systemd-networkd[1442]: cali0c61b4834ed: Gained IPv6LL Feb 13 19:29:08.089900 systemd-networkd[1442]: cali77830174835: Gained IPv6LL Feb 13 19:29:08.281910 systemd-networkd[1442]: calib48e866b9ea: Gained IPv6LL Feb 13 19:29:08.282330 systemd-networkd[1442]: cali45c64fb9de7: Gained IPv6LL Feb 13 19:29:08.306706 kubelet[2633]: E0213 19:29:08.306680 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:08.307148 kubelet[2633]: E0213 19:29:08.306895 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:08.346251 systemd-networkd[1442]: cali1b68bfca9f3: Gained IPv6LL Feb 13 19:29:09.049042 containerd[1509]: time="2025-02-13T19:29:09.048994902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:09.049777 containerd[1509]: time="2025-02-13T19:29:09.049713674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:29:09.050903 containerd[1509]: time="2025-02-13T19:29:09.050863204Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:09.053370 containerd[1509]: time="2025-02-13T19:29:09.053328699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:09.053986 containerd[1509]: time="2025-02-13T19:29:09.053960747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.357778494s" Feb 13 19:29:09.054025 containerd[1509]: time="2025-02-13T19:29:09.053988970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:29:09.055150 containerd[1509]: time="2025-02-13T19:29:09.055110488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:29:09.056186 containerd[1509]: time="2025-02-13T19:29:09.056153479Z" level=info msg="CreateContainer within sandbox \"ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:29:09.069463 containerd[1509]: time="2025-02-13T19:29:09.069410839Z" level=info msg="CreateContainer within sandbox \"ea9fe318508d848135cff3d438a8c265a1f4bdb7962d3be4b7947ebdea512c2d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"547211eed7cd6fcb9c25884086a47f96b08900c4e02d8f1c03c6fc032b37ba2f\"" Feb 13 19:29:09.069962 containerd[1509]: time="2025-02-13T19:29:09.069930106Z" level=info msg="StartContainer for \"547211eed7cd6fcb9c25884086a47f96b08900c4e02d8f1c03c6fc032b37ba2f\"" Feb 13 19:29:09.094730 systemd[1]: run-containerd-runc-k8s.io-547211eed7cd6fcb9c25884086a47f96b08900c4e02d8f1c03c6fc032b37ba2f-runc.AFB8Ow.mount: Deactivated successfully. Feb 13 19:29:09.103907 systemd[1]: Started cri-containerd-547211eed7cd6fcb9c25884086a47f96b08900c4e02d8f1c03c6fc032b37ba2f.scope - libcontainer container 547211eed7cd6fcb9c25884086a47f96b08900c4e02d8f1c03c6fc032b37ba2f. Feb 13 19:29:09.144115 containerd[1509]: time="2025-02-13T19:29:09.144075690Z" level=info msg="StartContainer for \"547211eed7cd6fcb9c25884086a47f96b08900c4e02d8f1c03c6fc032b37ba2f\" returns successfully" Feb 13 19:29:09.311265 kubelet[2633]: E0213 19:29:09.310811 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:09.311265 kubelet[2633]: E0213 19:29:09.310892 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:09.320352 kubelet[2633]: I0213 19:29:09.320260 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c445f8fb-sdsg2" podStartSLOduration=18.961177988 podStartE2EDuration="21.32024325s" podCreationTimestamp="2025-02-13 19:28:48 +0000 UTC" firstStartedPulling="2025-02-13 19:29:06.69580778 +0000 UTC m=+32.982827703" lastFinishedPulling="2025-02-13 19:29:09.054873032 +0000 UTC m=+35.341892965" observedRunningTime="2025-02-13 19:29:09.319971388 +0000 UTC m=+35.606991321" watchObservedRunningTime="2025-02-13 19:29:09.32024325 +0000 UTC m=+35.607263183" Feb 13 19:29:09.526313 containerd[1509]: time="2025-02-13T19:29:09.526226124Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:09.528181 containerd[1509]: time="2025-02-13T19:29:09.528115435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:29:09.530232 containerd[1509]: time="2025-02-13T19:29:09.530160189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size 
\"43494504\" in 475.011399ms" Feb 13 19:29:09.530232 containerd[1509]: time="2025-02-13T19:29:09.530217887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:29:09.532801 containerd[1509]: time="2025-02-13T19:29:09.532068165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:29:09.533170 containerd[1509]: time="2025-02-13T19:29:09.533126254Z" level=info msg="CreateContainer within sandbox \"87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:29:09.791527 containerd[1509]: time="2025-02-13T19:29:09.791480294Z" level=info msg="CreateContainer within sandbox \"87f1a83a9efc9b21b1aa328218938931ab6c89c2126a361cb27f16128fcabeec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5356d92225ee4dd5c0a770f806fd3e2efd1f8626c0550d16a2125c36e459690\"" Feb 13 19:29:09.791845 containerd[1509]: time="2025-02-13T19:29:09.791819100Z" level=info msg="StartContainer for \"d5356d92225ee4dd5c0a770f806fd3e2efd1f8626c0550d16a2125c36e459690\"" Feb 13 19:29:09.824901 systemd[1]: Started cri-containerd-d5356d92225ee4dd5c0a770f806fd3e2efd1f8626c0550d16a2125c36e459690.scope - libcontainer container d5356d92225ee4dd5c0a770f806fd3e2efd1f8626c0550d16a2125c36e459690. Feb 13 19:29:09.937604 containerd[1509]: time="2025-02-13T19:29:09.937547732Z" level=info msg="StartContainer for \"d5356d92225ee4dd5c0a770f806fd3e2efd1f8626c0550d16a2125c36e459690\" returns successfully" Feb 13 19:29:10.315703 kubelet[2633]: I0213 19:29:10.315651 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:29:10.324830 kubelet[2633]: I0213 19:29:10.324743 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c445f8fb-l74z9" podStartSLOduration=20.00447129 podStartE2EDuration="22.324731863s" podCreationTimestamp="2025-02-13 19:28:48 +0000 UTC" firstStartedPulling="2025-02-13 19:29:07.210830956 +0000 UTC m=+33.497850889" lastFinishedPulling="2025-02-13 19:29:09.531091529 +0000 UTC m=+35.818111462" observedRunningTime="2025-02-13 19:29:10.324587611 +0000 UTC m=+36.611607544" watchObservedRunningTime="2025-02-13 19:29:10.324731863 +0000 UTC m=+36.611751796" Feb 13 19:29:10.917674 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:41758.service - OpenSSH per-connection server daemon (10.0.0.1:41758). Feb 13 19:29:10.974241 sshd[5395]: Accepted publickey for core from 10.0.0.1 port 41758 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:10.976080 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:10.980401 systemd-logind[1491]: New session 11 of user core. Feb 13 19:29:10.990895 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:29:11.375514 sshd[5397]: Connection closed by 10.0.0.1 port 41758 Feb 13 19:29:11.375868 sshd-session[5395]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:11.379931 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:41758.service: Deactivated successfully. Feb 13 19:29:11.382031 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:29:11.382818 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:29:11.383782 systemd-logind[1491]: Removed session 11. 
Feb 13 19:29:12.333102 containerd[1509]: time="2025-02-13T19:29:12.332805022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:29:12.333102 containerd[1509]: time="2025-02-13T19:29:12.333072645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:12.334425 containerd[1509]: time="2025-02-13T19:29:12.334403606Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:12.336531 containerd[1509]: time="2025-02-13T19:29:12.336502210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:12.337102 containerd[1509]: time="2025-02-13T19:29:12.337059747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.804957959s" Feb 13 19:29:12.337102 containerd[1509]: time="2025-02-13T19:29:12.337095274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:29:12.337898 containerd[1509]: time="2025-02-13T19:29:12.337871742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:29:12.339035 containerd[1509]: time="2025-02-13T19:29:12.338994211Z" level=info msg="CreateContainer within sandbox \"34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:29:12.364282 containerd[1509]: time="2025-02-13T19:29:12.364237691Z" level=info msg="CreateContainer within sandbox \"34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"99745fad84d39b903dc4ecae26833a9811e4a1c9a2ea9d32b07cb66972123953\"" Feb 13 19:29:12.364720 containerd[1509]: time="2025-02-13T19:29:12.364689851Z" level=info msg="StartContainer for \"99745fad84d39b903dc4ecae26833a9811e4a1c9a2ea9d32b07cb66972123953\"" Feb 13 19:29:12.396910 systemd[1]: Started cri-containerd-99745fad84d39b903dc4ecae26833a9811e4a1c9a2ea9d32b07cb66972123953.scope - libcontainer container 99745fad84d39b903dc4ecae26833a9811e4a1c9a2ea9d32b07cb66972123953. 
Feb 13 19:29:12.427535 containerd[1509]: time="2025-02-13T19:29:12.427487653Z" level=info msg="StartContainer for \"99745fad84d39b903dc4ecae26833a9811e4a1c9a2ea9d32b07cb66972123953\" returns successfully" Feb 13 19:29:14.776055 kubelet[2633]: I0213 19:29:14.776005 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:29:14.776573 kubelet[2633]: E0213 19:29:14.776350 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:15.114007 containerd[1509]: time="2025-02-13T19:29:15.113863507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:15.115192 containerd[1509]: time="2025-02-13T19:29:15.115147469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:29:15.116884 containerd[1509]: time="2025-02-13T19:29:15.116840910Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:15.119827 containerd[1509]: time="2025-02-13T19:29:15.119774610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:15.120598 containerd[1509]: time="2025-02-13T19:29:15.120510673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.782599236s" Feb 13 19:29:15.120598 containerd[1509]: time="2025-02-13T19:29:15.120566998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:29:15.122525 containerd[1509]: time="2025-02-13T19:29:15.122176050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:29:15.131691 containerd[1509]: time="2025-02-13T19:29:15.131571827Z" level=info msg="CreateContainer within sandbox \"906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:29:15.258513 containerd[1509]: time="2025-02-13T19:29:15.258450273Z" level=info msg="CreateContainer within sandbox \"906a1da0bbe316068c89b614b83dcbadc42280ef6f915f92c37b4dc5944f76e4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0fca3ab0976b148fc40b057bf13b632d69368536fd13c16cfa3ab16ab490c2e6\"" Feb 13 19:29:15.259144 containerd[1509]: time="2025-02-13T19:29:15.258971242Z" level=info msg="StartContainer for \"0fca3ab0976b148fc40b057bf13b632d69368536fd13c16cfa3ab16ab490c2e6\"" Feb 13 19:29:15.285002 systemd[1]: Started cri-containerd-0fca3ab0976b148fc40b057bf13b632d69368536fd13c16cfa3ab16ab490c2e6.scope - libcontainer container 0fca3ab0976b148fc40b057bf13b632d69368536fd13c16cfa3ab16ab490c2e6. 
Feb 13 19:29:15.331107 containerd[1509]: time="2025-02-13T19:29:15.331066682Z" level=info msg="StartContainer for \"0fca3ab0976b148fc40b057bf13b632d69368536fd13c16cfa3ab16ab490c2e6\" returns successfully" Feb 13 19:29:15.335426 kubelet[2633]: E0213 19:29:15.335398 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:15.347513 kubelet[2633]: I0213 19:29:15.347424 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b67b658d9-8gkf8" podStartSLOduration=19.692751918 podStartE2EDuration="27.347403617s" podCreationTimestamp="2025-02-13 19:28:48 +0000 UTC" firstStartedPulling="2025-02-13 19:29:07.467251659 +0000 UTC m=+33.754271582" lastFinishedPulling="2025-02-13 19:29:15.121903348 +0000 UTC m=+41.408923281" observedRunningTime="2025-02-13 19:29:15.34724027 +0000 UTC m=+41.634260203" watchObservedRunningTime="2025-02-13 19:29:15.347403617 +0000 UTC m=+41.634423550" Feb 13 19:29:15.851809 kernel: bpftool[5632]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:29:16.085686 systemd-networkd[1442]: vxlan.calico: Link UP Feb 13 19:29:16.085697 systemd-networkd[1442]: vxlan.calico: Gained carrier Feb 13 19:29:16.388245 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:41768.service - OpenSSH per-connection server daemon (10.0.0.1:41768). Feb 13 19:29:16.538292 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 41768 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:16.540289 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:16.552615 systemd-logind[1491]: New session 12 of user core. Feb 13 19:29:16.558475 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:29:16.700183 sshd[5735]: Connection closed by 10.0.0.1 port 41768 Feb 13 19:29:16.700533 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:16.704945 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:41768.service: Deactivated successfully. Feb 13 19:29:16.707164 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:29:16.708046 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:29:16.709023 systemd-logind[1491]: Removed session 12. 
Feb 13 19:29:17.305930 systemd-networkd[1442]: vxlan.calico: Gained IPv6LL Feb 13 19:29:17.421689 containerd[1509]: time="2025-02-13T19:29:17.421597482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:17.428509 containerd[1509]: time="2025-02-13T19:29:17.428419673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:29:17.489605 containerd[1509]: time="2025-02-13T19:29:17.489547671Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:17.544922 containerd[1509]: time="2025-02-13T19:29:17.544835933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:17.545677 containerd[1509]: time="2025-02-13T19:29:17.545626127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.423412856s" Feb 13 19:29:17.545677 containerd[1509]: time="2025-02-13T19:29:17.545670961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:29:17.548133 containerd[1509]: time="2025-02-13T19:29:17.548092027Z" level=info msg="CreateContainer within sandbox \"34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:29:17.654928 containerd[1509]: time="2025-02-13T19:29:17.654788374Z" level=info msg="CreateContainer within sandbox \"34eba5682c834eccdc331d4637bdde37d26b36504aeed1b65f1e01d922d94ea5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e23b992bce679d60b3f2cb1f18489c38a72984f23d5ff10ccf36eb05813041cd\"" Feb 13 19:29:17.655942 containerd[1509]: time="2025-02-13T19:29:17.655597293Z" level=info msg="StartContainer for \"e23b992bce679d60b3f2cb1f18489c38a72984f23d5ff10ccf36eb05813041cd\"" Feb 13 19:29:17.694012 systemd[1]: Started cri-containerd-e23b992bce679d60b3f2cb1f18489c38a72984f23d5ff10ccf36eb05813041cd.scope - libcontainer container e23b992bce679d60b3f2cb1f18489c38a72984f23d5ff10ccf36eb05813041cd. 
Feb 13 19:29:17.728428 containerd[1509]: time="2025-02-13T19:29:17.728379617Z" level=info msg="StartContainer for \"e23b992bce679d60b3f2cb1f18489c38a72984f23d5ff10ccf36eb05813041cd\" returns successfully" Feb 13 19:29:17.840162 kubelet[2633]: I0213 19:29:17.840101 2633 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:29:17.840162 kubelet[2633]: I0213 19:29:17.840156 2633 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:29:18.354583 kubelet[2633]: I0213 19:29:18.354365 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k6nvb" podStartSLOduration=20.243254345 podStartE2EDuration="30.35435046s" podCreationTimestamp="2025-02-13 19:28:48 +0000 UTC" firstStartedPulling="2025-02-13 19:29:07.435449628 +0000 UTC m=+33.722469561" lastFinishedPulling="2025-02-13 19:29:17.546545743 +0000 UTC m=+43.833565676" observedRunningTime="2025-02-13 19:29:18.353805507 +0000 UTC m=+44.640825440" watchObservedRunningTime="2025-02-13 19:29:18.35435046 +0000 UTC m=+44.641370394" Feb 13 19:29:21.713705 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:55918.service - OpenSSH per-connection server daemon (10.0.0.1:55918). Feb 13 19:29:21.774299 sshd[5798]: Accepted publickey for core from 10.0.0.1 port 55918 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:21.775987 sshd-session[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:21.780646 systemd-logind[1491]: New session 13 of user core. Feb 13 19:29:21.787909 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:29:21.916292 sshd[5800]: Connection closed by 10.0.0.1 port 55918 Feb 13 19:29:21.916824 sshd-session[5798]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:21.927030 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:55918.service: Deactivated successfully. Feb 13 19:29:21.929150 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:29:21.930949 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:29:21.938062 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:55926.service - OpenSSH per-connection server daemon (10.0.0.1:55926). Feb 13 19:29:21.939115 systemd-logind[1491]: Removed session 13. Feb 13 19:29:21.977174 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 55926 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:21.978904 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:21.983553 systemd-logind[1491]: New session 14 of user core. Feb 13 19:29:21.992918 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:29:22.158513 sshd[5816]: Connection closed by 10.0.0.1 port 55926 Feb 13 19:29:22.158953 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:22.174959 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:55926.service: Deactivated successfully. Feb 13 19:29:22.179705 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:29:22.181966 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:29:22.191170 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:55936.service - OpenSSH per-connection server daemon (10.0.0.1:55936). 
Feb 13 19:29:22.193980 systemd-logind[1491]: Removed session 14. Feb 13 19:29:22.243680 sshd[5826]: Accepted publickey for core from 10.0.0.1 port 55936 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:22.244980 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:22.249182 systemd-logind[1491]: New session 15 of user core. Feb 13 19:29:22.256878 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:29:22.375935 sshd[5829]: Connection closed by 10.0.0.1 port 55936 Feb 13 19:29:22.376281 sshd-session[5826]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:22.380641 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:55936.service: Deactivated successfully. Feb 13 19:29:22.382848 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:29:22.383670 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:29:22.384643 systemd-logind[1491]: Removed session 15. Feb 13 19:29:27.390691 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:55948.service - OpenSSH per-connection server daemon (10.0.0.1:55948). Feb 13 19:29:27.433549 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 55948 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:27.435174 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:27.439126 systemd-logind[1491]: New session 16 of user core. Feb 13 19:29:27.448892 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:29:27.558543 sshd[5859]: Connection closed by 10.0.0.1 port 55948 Feb 13 19:29:27.558910 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:27.562742 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:55948.service: Deactivated successfully. Feb 13 19:29:27.564980 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:29:27.565663 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:29:27.566499 systemd-logind[1491]: Removed session 16. Feb 13 19:29:32.571897 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:54700.service - OpenSSH per-connection server daemon (10.0.0.1:54700). Feb 13 19:29:32.610899 sshd[5872]: Accepted publickey for core from 10.0.0.1 port 54700 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:32.612292 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:32.616613 systemd-logind[1491]: New session 17 of user core. Feb 13 19:29:32.624888 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:29:32.742788 sshd[5874]: Connection closed by 10.0.0.1 port 54700 Feb 13 19:29:32.743243 sshd-session[5872]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:32.751656 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:54700.service: Deactivated successfully. Feb 13 19:29:32.753690 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:29:32.755273 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:29:32.765454 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:54704.service - OpenSSH per-connection server daemon (10.0.0.1:54704). Feb 13 19:29:32.766504 systemd-logind[1491]: Removed session 17. 
Feb 13 19:29:32.804170 sshd[5887]: Accepted publickey for core from 10.0.0.1 port 54704 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:32.805742 sshd-session[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:32.810352 systemd-logind[1491]: New session 18 of user core. Feb 13 19:29:32.816883 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:29:33.007285 sshd[5890]: Connection closed by 10.0.0.1 port 54704 Feb 13 19:29:33.007492 sshd-session[5887]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:33.017801 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:54704.service: Deactivated successfully. Feb 13 19:29:33.020001 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:29:33.021484 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:29:33.028995 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:54708.service - OpenSSH per-connection server daemon (10.0.0.1:54708). Feb 13 19:29:33.029939 systemd-logind[1491]: Removed session 18. Feb 13 19:29:33.071236 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 54708 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:33.073155 sshd-session[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:33.077736 systemd-logind[1491]: New session 19 of user core. Feb 13 19:29:33.082904 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:29:33.770978 containerd[1509]: time="2025-02-13T19:29:33.770926442Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:29:33.771574 containerd[1509]: time="2025-02-13T19:29:33.771097092Z" level=info msg="TearDown network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" successfully" Feb 13 19:29:33.771574 containerd[1509]: time="2025-02-13T19:29:33.771148529Z" level=info msg="StopPodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" returns successfully" Feb 13 19:29:33.777933 containerd[1509]: time="2025-02-13T19:29:33.777884425Z" level=info msg="RemovePodSandbox for \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:29:33.792194 containerd[1509]: time="2025-02-13T19:29:33.792151049Z" level=info msg="Forcibly stopping sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\"" Feb 13 19:29:33.792341 containerd[1509]: time="2025-02-13T19:29:33.792289770Z" level=info msg="TearDown network for sandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" successfully" Feb 13 19:29:33.976023 containerd[1509]: time="2025-02-13T19:29:33.975411928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:33.978970 containerd[1509]: time="2025-02-13T19:29:33.978932102Z" level=info msg="RemovePodSandbox \"867a2e912ba1735bbb556bcc898fc4c460e462102743641bad1357797243b289\" returns successfully" Feb 13 19:29:33.979696 containerd[1509]: time="2025-02-13T19:29:33.979665308Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" Feb 13 19:29:33.979816 containerd[1509]: time="2025-02-13T19:29:33.979795873Z" level=info msg="TearDown network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" successfully" Feb 13 19:29:33.979816 containerd[1509]: time="2025-02-13T19:29:33.979810040Z" level=info msg="StopPodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" returns successfully" Feb 13 19:29:33.980687 containerd[1509]: time="2025-02-13T19:29:33.980083051Z" level=info msg="RemovePodSandbox for \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" Feb 13 19:29:33.980687 containerd[1509]: time="2025-02-13T19:29:33.980106576Z" level=info msg="Forcibly stopping sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\"" Feb 13 19:29:33.980687 containerd[1509]: time="2025-02-13T19:29:33.980192887Z" level=info msg="TearDown network for sandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" successfully" Feb 13 19:29:33.998263 containerd[1509]: time="2025-02-13T19:29:33.998218053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:33.998408 containerd[1509]: time="2025-02-13T19:29:33.998281562Z" level=info msg="RemovePodSandbox \"aa39e122fbf0cdf6a62275bf934b1987e37b14d6e234c0efa9d4896c365b700c\" returns successfully" Feb 13 19:29:33.998867 containerd[1509]: time="2025-02-13T19:29:33.998680360Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\"" Feb 13 19:29:33.998867 containerd[1509]: time="2025-02-13T19:29:33.998796278Z" level=info msg="TearDown network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" successfully" Feb 13 19:29:33.998867 containerd[1509]: time="2025-02-13T19:29:33.998806006Z" level=info msg="StopPodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" returns successfully" Feb 13 19:29:33.999097 containerd[1509]: time="2025-02-13T19:29:33.999076453Z" level=info msg="RemovePodSandbox for \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\"" Feb 13 19:29:33.999097 containerd[1509]: time="2025-02-13T19:29:33.999095549Z" level=info msg="Forcibly stopping sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\"" Feb 13 19:29:33.999193 containerd[1509]: time="2025-02-13T19:29:33.999159358Z" level=info msg="TearDown network for sandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" successfully" Feb 13 19:29:34.037963 containerd[1509]: time="2025-02-13T19:29:34.037930742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.038067 containerd[1509]: time="2025-02-13T19:29:34.037985134Z" level=info msg="RemovePodSandbox \"374c71d3639bfaff04ee9987c5e5f21a401a07472b6e52b0e960770ddcd5948e\" returns successfully" Feb 13 19:29:34.038431 containerd[1509]: time="2025-02-13T19:29:34.038398590Z" level=info msg="StopPodSandbox for \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\"" Feb 13 19:29:34.038530 containerd[1509]: time="2025-02-13T19:29:34.038504408Z" level=info msg="TearDown network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" successfully" Feb 13 19:29:34.038530 containerd[1509]: time="2025-02-13T19:29:34.038522262Z" level=info msg="StopPodSandbox for \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" returns successfully" Feb 13 19:29:34.038752 containerd[1509]: time="2025-02-13T19:29:34.038725002Z" level=info msg="RemovePodSandbox for \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\"" Feb 13 19:29:34.038752 containerd[1509]: time="2025-02-13T19:29:34.038745090Z" level=info msg="Forcibly stopping sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\"" Feb 13 19:29:34.038856 containerd[1509]: time="2025-02-13T19:29:34.038822334Z" level=info msg="TearDown network for sandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" successfully" Feb 13 19:29:34.058921 containerd[1509]: time="2025-02-13T19:29:34.058867809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.058921 containerd[1509]: time="2025-02-13T19:29:34.058948099Z" level=info msg="RemovePodSandbox \"5906a9ec2e2e766868f1435a0d90b5d2c78f6b44cba83c53466e8dabbc71c14c\" returns successfully" Feb 13 19:29:34.059545 containerd[1509]: time="2025-02-13T19:29:34.059476842Z" level=info msg="StopPodSandbox for \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\"" Feb 13 19:29:34.059602 containerd[1509]: time="2025-02-13T19:29:34.059585846Z" level=info msg="TearDown network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\" successfully" Feb 13 19:29:34.059602 containerd[1509]: time="2025-02-13T19:29:34.059599632Z" level=info msg="StopPodSandbox for \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\" returns successfully" Feb 13 19:29:34.060144 containerd[1509]: time="2025-02-13T19:29:34.059888904Z" level=info msg="RemovePodSandbox for \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\"" Feb 13 19:29:34.060144 containerd[1509]: time="2025-02-13T19:29:34.059925373Z" level=info msg="Forcibly stopping sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\"" Feb 13 19:29:34.060144 containerd[1509]: time="2025-02-13T19:29:34.060016133Z" level=info msg="TearDown network for sandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\" successfully" Feb 13 19:29:34.075237 containerd[1509]: time="2025-02-13T19:29:34.075157817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.075237 containerd[1509]: time="2025-02-13T19:29:34.075229612Z" level=info msg="RemovePodSandbox \"90c9e308dc84b9c8db8b5ed7d042355c7a5a8a9768f14bc5cc75943376a4d0a1\" returns successfully" Feb 13 19:29:34.075878 containerd[1509]: time="2025-02-13T19:29:34.075685728Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:29:34.075878 containerd[1509]: time="2025-02-13T19:29:34.075847882Z" level=info msg="TearDown network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" successfully" Feb 13 19:29:34.075935 containerd[1509]: time="2025-02-13T19:29:34.075886885Z" level=info msg="StopPodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" returns successfully" Feb 13 19:29:34.076337 containerd[1509]: time="2025-02-13T19:29:34.076254495Z" level=info msg="RemovePodSandbox for \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:29:34.076337 containerd[1509]: time="2025-02-13T19:29:34.076331930Z" level=info msg="Forcibly stopping sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\"" Feb 13 19:29:34.076455 containerd[1509]: time="2025-02-13T19:29:34.076407632Z" level=info msg="TearDown network for sandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" successfully" Feb 13 19:29:34.081000 containerd[1509]: time="2025-02-13T19:29:34.080943071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.081124 containerd[1509]: time="2025-02-13T19:29:34.081021317Z" level=info msg="RemovePodSandbox \"d32b0694bb529783fce18e28a44a6900827d5e5cdca59ef94ab395740291d754\" returns successfully" Feb 13 19:29:34.081580 containerd[1509]: time="2025-02-13T19:29:34.081522387Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" Feb 13 19:29:34.081654 sshd[5904]: Connection closed by 10.0.0.1 port 54708 Feb 13 19:29:34.082094 containerd[1509]: time="2025-02-13T19:29:34.081655767Z" level=info msg="TearDown network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" successfully" Feb 13 19:29:34.082094 containerd[1509]: time="2025-02-13T19:29:34.081668070Z" level=info msg="StopPodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" returns successfully" Feb 13 19:29:34.082403 containerd[1509]: time="2025-02-13T19:29:34.082371690Z" level=info msg="RemovePodSandbox for \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" Feb 13 19:29:34.082605 containerd[1509]: time="2025-02-13T19:29:34.082475665Z" level=info msg="Forcibly stopping sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\"" Feb 13 19:29:34.082605 containerd[1509]: time="2025-02-13T19:29:34.082563680Z" level=info msg="TearDown network for sandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" successfully" Feb 13 19:29:34.083830 sshd-session[5901]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:34.089036 containerd[1509]: time="2025-02-13T19:29:34.088625993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\": an error occurred when try to find sandbox: not 
found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.089036 containerd[1509]: time="2025-02-13T19:29:34.088801152Z" level=info msg="RemovePodSandbox \"720944bef01d08464383b89d4f1667c30507331d07cbbbd118d0f040f985f0ba\" returns successfully" Feb 13 19:29:34.091012 containerd[1509]: time="2025-02-13T19:29:34.090031039Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\"" Feb 13 19:29:34.091012 containerd[1509]: time="2025-02-13T19:29:34.090159651Z" level=info msg="TearDown network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" successfully" Feb 13 19:29:34.091012 containerd[1509]: time="2025-02-13T19:29:34.090171573Z" level=info msg="StopPodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" returns successfully" Feb 13 19:29:34.092091 containerd[1509]: time="2025-02-13T19:29:34.091868706Z" level=info msg="RemovePodSandbox for \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\"" Feb 13 19:29:34.092091 containerd[1509]: time="2025-02-13T19:29:34.091903512Z" level=info msg="Forcibly stopping sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\"" Feb 13 19:29:34.092091 containerd[1509]: time="2025-02-13T19:29:34.091969746Z" level=info msg="TearDown network for sandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" successfully" Feb 13 19:29:34.100001 containerd[1509]: time="2025-02-13T19:29:34.099858175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.100174 containerd[1509]: time="2025-02-13T19:29:34.100039905Z" level=info msg="RemovePodSandbox \"4c323645363e7abdb089e3505881795c0cf526f6723ec93b4d40b9ad63755cbf\" returns successfully" Feb 13 19:29:34.102108 containerd[1509]: time="2025-02-13T19:29:34.100896262Z" level=info msg="StopPodSandbox for \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\"" Feb 13 19:29:34.102108 containerd[1509]: time="2025-02-13T19:29:34.101099784Z" level=info msg="TearDown network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" successfully" Feb 13 19:29:34.102108 containerd[1509]: time="2025-02-13T19:29:34.101114471Z" level=info msg="StopPodSandbox for \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" returns successfully" Feb 13 19:29:34.102108 containerd[1509]: time="2025-02-13T19:29:34.101455911Z" level=info msg="RemovePodSandbox for \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\"" Feb 13 19:29:34.102108 containerd[1509]: time="2025-02-13T19:29:34.101473665Z" level=info msg="Forcibly stopping sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\"" Feb 13 19:29:34.102108 containerd[1509]: time="2025-02-13T19:29:34.101532194Z" level=info msg="TearDown network for sandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" successfully" Feb 13 19:29:34.101202 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:54722.service - OpenSSH per-connection server daemon (10.0.0.1:54722). Feb 13 19:29:34.101978 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:54708.service: Deactivated successfully. Feb 13 19:29:34.105828 systemd[1]: session-19.scope: Deactivated successfully. 
Feb 13 19:29:34.106818 containerd[1509]: time="2025-02-13T19:29:34.106739243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.107074 containerd[1509]: time="2025-02-13T19:29:34.106833569Z" level=info msg="RemovePodSandbox \"794329b590d0e6b85d4bfac95edd1a9366daa49f947206af06f72f93c8ad0833\" returns successfully" Feb 13 19:29:34.108787 containerd[1509]: time="2025-02-13T19:29:34.108726461Z" level=info msg="StopPodSandbox for \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\"" Feb 13 19:29:34.111587 containerd[1509]: time="2025-02-13T19:29:34.108888344Z" level=info msg="TearDown network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\" successfully" Feb 13 19:29:34.111587 containerd[1509]: time="2025-02-13T19:29:34.108907941Z" level=info msg="StopPodSandbox for \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\" returns successfully" Feb 13 19:29:34.111587 containerd[1509]: time="2025-02-13T19:29:34.110799259Z" level=info msg="RemovePodSandbox for \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\"" Feb 13 19:29:34.111587 containerd[1509]: time="2025-02-13T19:29:34.110907332Z" level=info msg="Forcibly stopping sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\"" Feb 13 19:29:34.111587 containerd[1509]: time="2025-02-13T19:29:34.110982022Z" level=info msg="TearDown network for sandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\" successfully" Feb 13 19:29:34.110039 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:29:34.114557 systemd-logind[1491]: Removed session 19. Feb 13 19:29:34.118331 containerd[1509]: time="2025-02-13T19:29:34.116682767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.118331 containerd[1509]: time="2025-02-13T19:29:34.117000332Z" level=info msg="RemovePodSandbox \"c5b01ad919981cd82445b883b1b0f1dff3e18db62405bf74c9e4c870c5592c0b\" returns successfully" Feb 13 19:29:34.118634 containerd[1509]: time="2025-02-13T19:29:34.118319326Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:29:34.118729 containerd[1509]: time="2025-02-13T19:29:34.118700893Z" level=info msg="TearDown network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" successfully" Feb 13 19:29:34.118729 containerd[1509]: time="2025-02-13T19:29:34.118721621Z" level=info msg="StopPodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" returns successfully" Feb 13 19:29:34.119067 containerd[1509]: time="2025-02-13T19:29:34.118970538Z" level=info msg="RemovePodSandbox for \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:29:34.119067 containerd[1509]: time="2025-02-13T19:29:34.118990025Z" level=info msg="Forcibly stopping sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\"" Feb 13 19:29:34.119161 containerd[1509]: time="2025-02-13T19:29:34.119061649Z" level=info msg="TearDown network for sandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" successfully" Feb 13 19:29:34.123285 containerd[1509]: time="2025-02-13T19:29:34.123244526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.123285 containerd[1509]: time="2025-02-13T19:29:34.123280694Z" level=info msg="RemovePodSandbox \"3946c046c51c9c5cc4180686e3a04d64f682b5ddca6aaacdf905d4667b877a80\" returns successfully" Feb 13 19:29:34.123621 containerd[1509]: time="2025-02-13T19:29:34.123586247Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" Feb 13 19:29:34.123746 containerd[1509]: time="2025-02-13T19:29:34.123723103Z" level=info msg="TearDown network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" successfully" Feb 13 19:29:34.123746 containerd[1509]: time="2025-02-13T19:29:34.123737099Z" level=info msg="StopPodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" returns successfully" Feb 13 19:29:34.124003 containerd[1509]: time="2025-02-13T19:29:34.123980797Z" level=info msg="RemovePodSandbox for \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" Feb 13 19:29:34.124003 containerd[1509]: time="2025-02-13T19:29:34.124004071Z" level=info msg="Forcibly stopping sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\"" Feb 13 19:29:34.124119 containerd[1509]: time="2025-02-13T19:29:34.124078811Z" level=info msg="TearDown network for sandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" successfully" Feb 13 19:29:34.127748 containerd[1509]: time="2025-02-13T19:29:34.127712798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.127748 containerd[1509]: time="2025-02-13T19:29:34.127744828Z" level=info msg="RemovePodSandbox \"5f1779f931f6e6627d37762c51b374d70ef72e182fb02edfdc27bc1eeb786e3a\" returns successfully" Feb 13 19:29:34.128065 containerd[1509]: time="2025-02-13T19:29:34.128029091Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\"" Feb 13 19:29:34.128159 containerd[1509]: time="2025-02-13T19:29:34.128126714Z" level=info msg="TearDown network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" successfully" Feb 13 19:29:34.128159 containerd[1509]: time="2025-02-13T19:29:34.128140229Z" level=info msg="StopPodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" returns successfully" Feb 13 19:29:34.128525 containerd[1509]: time="2025-02-13T19:29:34.128498582Z" level=info msg="RemovePodSandbox for \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\"" Feb 13 19:29:34.128874 containerd[1509]: time="2025-02-13T19:29:34.128646179Z" level=info msg="Forcibly stopping sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\"" Feb 13 19:29:34.128874 containerd[1509]: time="2025-02-13T19:29:34.128755965Z" level=info msg="TearDown network for sandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" successfully" Feb 13 19:29:34.133108 containerd[1509]: time="2025-02-13T19:29:34.133082732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.133227 containerd[1509]: time="2025-02-13T19:29:34.133209559Z" level=info msg="RemovePodSandbox \"e4bf33fdd4969855e7bbb6316ed7cb21f5954abb048fc3de824262a5fcf5a3e3\" returns successfully" Feb 13 19:29:34.133546 containerd[1509]: time="2025-02-13T19:29:34.133521435Z" level=info msg="StopPodSandbox for \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\"" Feb 13 19:29:34.133840 containerd[1509]: time="2025-02-13T19:29:34.133734194Z" level=info msg="TearDown network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" successfully" Feb 13 19:29:34.133840 containerd[1509]: time="2025-02-13T19:29:34.133748831Z" level=info msg="StopPodSandbox for \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" returns successfully" Feb 13 19:29:34.134084 containerd[1509]: time="2025-02-13T19:29:34.134044947Z" level=info msg="RemovePodSandbox for \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\"" Feb 13 19:29:34.134121 containerd[1509]: time="2025-02-13T19:29:34.134087126Z" level=info msg="Forcibly stopping sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\"" Feb 13 19:29:34.134209 containerd[1509]: time="2025-02-13T19:29:34.134172977Z" level=info msg="TearDown network for sandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" successfully" Feb 13 19:29:34.137815 containerd[1509]: time="2025-02-13T19:29:34.137789281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.137873 containerd[1509]: time="2025-02-13T19:29:34.137828675Z" level=info msg="RemovePodSandbox \"e61c8386b7f9b3408b8f416916e70954ac514303f4063208f8e592c7dcf03d49\" returns successfully" Feb 13 19:29:34.138117 containerd[1509]: time="2025-02-13T19:29:34.138097990Z" level=info msg="StopPodSandbox for \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\"" Feb 13 19:29:34.138254 containerd[1509]: time="2025-02-13T19:29:34.138174443Z" level=info msg="TearDown network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\" successfully" Feb 13 19:29:34.138254 containerd[1509]: time="2025-02-13T19:29:34.138213487Z" level=info msg="StopPodSandbox for \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\" returns successfully" Feb 13 19:29:34.138706 containerd[1509]: time="2025-02-13T19:29:34.138683939Z" level=info msg="RemovePodSandbox for \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\"" Feb 13 19:29:34.138706 containerd[1509]: time="2025-02-13T19:29:34.138701873Z" level=info msg="Forcibly stopping sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\"" Feb 13 19:29:34.138913 containerd[1509]: time="2025-02-13T19:29:34.138787023Z" level=info msg="TearDown network for sandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\" successfully" Feb 13 19:29:34.142339 containerd[1509]: time="2025-02-13T19:29:34.142303690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.142339 containerd[1509]: time="2025-02-13T19:29:34.142334077Z" level=info msg="RemovePodSandbox \"dbee5e4c043aeecb213900cf0b6f5de335fb4c65b2fd393ef3663d3b41ca0af7\" returns successfully" Feb 13 19:29:34.142554 containerd[1509]: time="2025-02-13T19:29:34.142538891Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:29:34.142628 containerd[1509]: time="2025-02-13T19:29:34.142611096Z" level=info msg="TearDown network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" successfully" Feb 13 19:29:34.142628 containerd[1509]: time="2025-02-13T19:29:34.142625363Z" level=info msg="StopPodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" returns successfully" Feb 13 19:29:34.143544 containerd[1509]: time="2025-02-13T19:29:34.143507267Z" level=info msg="RemovePodSandbox for \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:29:34.143544 containerd[1509]: time="2025-02-13T19:29:34.143533136Z" level=info msg="Forcibly stopping sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\"" Feb 13 19:29:34.143653 containerd[1509]: time="2025-02-13T19:29:34.143607335Z" level=info msg="TearDown network for sandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" successfully" Feb 13 19:29:34.147286 containerd[1509]: time="2025-02-13T19:29:34.147257623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.147337 containerd[1509]: time="2025-02-13T19:29:34.147292930Z" level=info msg="RemovePodSandbox \"10d48e651930b292ad3bc398119dc6f6e579ef4e58bae9fcdaf8c5e8e1ca34c0\" returns successfully" Feb 13 19:29:34.147558 containerd[1509]: time="2025-02-13T19:29:34.147539632Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" Feb 13 19:29:34.147632 containerd[1509]: time="2025-02-13T19:29:34.147618680Z" level=info msg="TearDown network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" successfully" Feb 13 19:29:34.147654 containerd[1509]: time="2025-02-13T19:29:34.147630563Z" level=info msg="StopPodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" returns successfully" Feb 13 19:29:34.147958 containerd[1509]: time="2025-02-13T19:29:34.147935134Z" level=info msg="RemovePodSandbox for \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" Feb 13 19:29:34.147997 containerd[1509]: time="2025-02-13T19:29:34.147962856Z" level=info msg="Forcibly stopping sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\"" Feb 13 19:29:34.148075 containerd[1509]: time="2025-02-13T19:29:34.148059106Z" level=info msg="TearDown network for sandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" successfully" Feb 13 19:29:34.151842 containerd[1509]: time="2025-02-13T19:29:34.151814161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.151886 containerd[1509]: time="2025-02-13T19:29:34.151846441Z" level=info msg="RemovePodSandbox \"796129f133829bf5e2901288ff8f9671877733576c3336c0118ab3c499f512cc\" returns successfully" Feb 13 19:29:34.152019 sshd[5921]: Accepted publickey for core from 10.0.0.1 port 54722 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:34.152283 containerd[1509]: time="2025-02-13T19:29:34.152071634Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\"" Feb 13 19:29:34.152283 containerd[1509]: time="2025-02-13T19:29:34.152145873Z" level=info msg="TearDown network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" successfully" Feb 13 19:29:34.152283 containerd[1509]: time="2025-02-13T19:29:34.152154920Z" level=info msg="StopPodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" returns successfully" Feb 13 19:29:34.152351 containerd[1509]: time="2025-02-13T19:29:34.152329477Z" level=info msg="RemovePodSandbox for \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\"" Feb 13 19:29:34.152351 containerd[1509]: time="2025-02-13T19:29:34.152345798Z" level=info msg="Forcibly stopping sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\"" Feb 13 19:29:34.152448 containerd[1509]: time="2025-02-13T19:29:34.152407012Z" level=info msg="TearDown network for sandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" successfully" Feb 13 19:29:34.154029 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:34.156023 containerd[1509]: time="2025-02-13T19:29:34.156000625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.156115 containerd[1509]: time="2025-02-13T19:29:34.156037083Z" level=info msg="RemovePodSandbox \"cf66925f03ce7925bbe313acc2964bb30684e6fb57db9c045148763933cce627\" returns successfully" Feb 13 19:29:34.156332 containerd[1509]: time="2025-02-13T19:29:34.156306298Z" level=info msg="StopPodSandbox for \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\"" Feb 13 19:29:34.156413 containerd[1509]: time="2025-02-13T19:29:34.156395355Z" level=info msg="TearDown network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" successfully" Feb 13 19:29:34.156413 containerd[1509]: time="2025-02-13T19:29:34.156409080Z" level=info msg="StopPodSandbox for \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" returns successfully" Feb 13 19:29:34.156629 containerd[1509]: time="2025-02-13T19:29:34.156609917Z" level=info msg="RemovePodSandbox for \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\"" Feb 13 19:29:34.156664 containerd[1509]: time="2025-02-13T19:29:34.156629924Z" level=info msg="Forcibly stopping sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\"" Feb 13 19:29:34.156728 containerd[1509]: time="2025-02-13T19:29:34.156693845Z" level=info msg="TearDown network for sandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" successfully" Feb 13 19:29:34.158968 systemd-logind[1491]: New session 20 of user core. Feb 13 19:29:34.160510 containerd[1509]: time="2025-02-13T19:29:34.160464478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.160510 containerd[1509]: time="2025-02-13T19:29:34.160508992Z" level=info msg="RemovePodSandbox \"9a58f91200163ca30a073960d48ce935da629f86cf85e816435ee9bad81e623c\" returns successfully" Feb 13 19:29:34.160736 containerd[1509]: time="2025-02-13T19:29:34.160704999Z" level=info msg="StopPodSandbox for \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\"" Feb 13 19:29:34.160838 containerd[1509]: time="2025-02-13T19:29:34.160818582Z" level=info msg="TearDown network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\" successfully" Feb 13 19:29:34.160838 containerd[1509]: time="2025-02-13T19:29:34.160831316Z" level=info msg="StopPodSandbox for \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\" returns successfully" Feb 13 19:29:34.161090 containerd[1509]: time="2025-02-13T19:29:34.161043415Z" level=info msg="RemovePodSandbox for \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\"" Feb 13 19:29:34.161122 containerd[1509]: time="2025-02-13T19:29:34.161091856Z" level=info msg="Forcibly stopping sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\"" Feb 13 19:29:34.161175 containerd[1509]: time="2025-02-13T19:29:34.161150916Z" level=info msg="TearDown network for sandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\" successfully" Feb 13 19:29:34.164787 containerd[1509]: time="2025-02-13T19:29:34.164746942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.164828 containerd[1509]: time="2025-02-13T19:29:34.164797196Z" level=info msg="RemovePodSandbox \"2f94986f9401a07e2c82abf08692bdc96f9e73b7243512571a60065feefc553a\" returns successfully" Feb 13 19:29:34.165042 containerd[1509]: time="2025-02-13T19:29:34.165024502Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:29:34.165125 containerd[1509]: time="2025-02-13T19:29:34.165108390Z" level=info msg="TearDown network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" successfully" Feb 13 19:29:34.165166 containerd[1509]: time="2025-02-13T19:29:34.165122326Z" level=info msg="StopPodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" returns successfully" Feb 13 19:29:34.165369 containerd[1509]: time="2025-02-13T19:29:34.165330537Z" level=info msg="RemovePodSandbox for \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:29:34.165369 containerd[1509]: time="2025-02-13T19:29:34.165355895Z" level=info msg="Forcibly stopping sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\"" Feb 13 19:29:34.165471 containerd[1509]: time="2025-02-13T19:29:34.165429723Z" level=info msg="TearDown network for sandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" successfully" Feb 13 19:29:34.169309 containerd[1509]: time="2025-02-13T19:29:34.169276559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.169366 containerd[1509]: time="2025-02-13T19:29:34.169317205Z" level=info msg="RemovePodSandbox \"a18f5f310c3a41e35b8548b037dc0b306e0e86e7fee583e64d4aa142b0b7a86f\" returns successfully" Feb 13 19:29:34.169604 containerd[1509]: time="2025-02-13T19:29:34.169585098Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" Feb 13 19:29:34.169678 containerd[1509]: time="2025-02-13T19:29:34.169665730Z" level=info msg="TearDown network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" successfully" Feb 13 19:29:34.169703 containerd[1509]: time="2025-02-13T19:29:34.169677281Z" level=info msg="StopPodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" returns successfully" Feb 13 19:29:34.170282 containerd[1509]: time="2025-02-13T19:29:34.169900991Z" level=info msg="RemovePodSandbox for \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" Feb 13 19:29:34.170282 containerd[1509]: time="2025-02-13T19:29:34.169920257Z" level=info msg="Forcibly stopping sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\"" Feb 13 19:29:34.170282 containerd[1509]: time="2025-02-13T19:29:34.169980420Z" level=info msg="TearDown network for sandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" successfully" Feb 13 19:29:34.169936 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:29:34.173742 containerd[1509]: time="2025-02-13T19:29:34.173714836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.173813 containerd[1509]: time="2025-02-13T19:29:34.173755502Z" level=info msg="RemovePodSandbox \"851338a550e99f2c7458bfe1d7eb9d8099e78e8813c156cf73c06959c11f4f08\" returns successfully" Feb 13 19:29:34.174081 containerd[1509]: time="2025-02-13T19:29:34.174040917Z" level=info msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\"" Feb 13 19:29:34.174174 containerd[1509]: time="2025-02-13T19:29:34.174151605Z" level=info msg="TearDown network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" successfully" Feb 13 19:29:34.174174 containerd[1509]: time="2025-02-13T19:29:34.174166192Z" level=info msg="StopPodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" returns successfully" Feb 13 19:29:34.174461 containerd[1509]: time="2025-02-13T19:29:34.174442049Z" level=info msg="RemovePodSandbox for \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\"" Feb 13 19:29:34.174512 containerd[1509]: time="2025-02-13T19:29:34.174464171Z" level=info msg="Forcibly stopping sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\"" Feb 13 19:29:34.174639 containerd[1509]: time="2025-02-13T19:29:34.174548990Z" level=info msg="TearDown network for sandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" successfully" Feb 13 19:29:34.178921 containerd[1509]: time="2025-02-13T19:29:34.178886537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.178985 containerd[1509]: time="2025-02-13T19:29:34.178959544Z" level=info msg="RemovePodSandbox \"e44dcb72dd495ea8b0f36f07ee623827bdcc419a379c7021883888d884c80a75\" returns successfully" Feb 13 19:29:34.179507 containerd[1509]: time="2025-02-13T19:29:34.179479319Z" level=info msg="StopPodSandbox for \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\"" Feb 13 19:29:34.179620 containerd[1509]: time="2025-02-13T19:29:34.179604023Z" level=info msg="TearDown network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" successfully" Feb 13 19:29:34.179658 containerd[1509]: time="2025-02-13T19:29:34.179618811Z" level=info msg="StopPodSandbox for \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" returns successfully" Feb 13 19:29:34.179862 containerd[1509]: time="2025-02-13T19:29:34.179838563Z" level=info msg="RemovePodSandbox for \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\"" Feb 13 19:29:34.179862 containerd[1509]: time="2025-02-13T19:29:34.179859832Z" level=info msg="Forcibly stopping sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\"" Feb 13 19:29:34.179966 containerd[1509]: time="2025-02-13T19:29:34.179935354Z" level=info msg="TearDown network for sandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" successfully" Feb 13 19:29:34.183791 containerd[1509]: time="2025-02-13T19:29:34.183771992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.183886 containerd[1509]: time="2025-02-13T19:29:34.183862582Z" level=info msg="RemovePodSandbox \"110fd01f59b1f81df4c2f14583ef23c985a303b4cdfcaf14828bfe2a6275d169\" returns successfully" Feb 13 19:29:34.184114 containerd[1509]: time="2025-02-13T19:29:34.184094456Z" level=info msg="StopPodSandbox for \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\"" Feb 13 19:29:34.184230 containerd[1509]: time="2025-02-13T19:29:34.184183194Z" level=info msg="TearDown network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\" successfully" Feb 13 19:29:34.184230 containerd[1509]: time="2025-02-13T19:29:34.184224712Z" level=info msg="StopPodSandbox for \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\" returns successfully" Feb 13 19:29:34.184509 containerd[1509]: time="2025-02-13T19:29:34.184491091Z" level=info msg="RemovePodSandbox for \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\"" Feb 13 19:29:34.184612 containerd[1509]: time="2025-02-13T19:29:34.184511259Z" level=info msg="Forcibly stopping sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\"" Feb 13 19:29:34.184612 containerd[1509]: time="2025-02-13T19:29:34.184572153Z" level=info msg="TearDown network for sandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\" successfully" Feb 13 19:29:34.188506 containerd[1509]: time="2025-02-13T19:29:34.188474875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.188554 containerd[1509]: time="2025-02-13T19:29:34.188505993Z" level=info msg="RemovePodSandbox \"4fcbdebc65dd03995bfbdcc6d0f131b1741766c07a210b7d1f09717b0e9fa6a2\" returns successfully" Feb 13 19:29:34.188716 containerd[1509]: time="2025-02-13T19:29:34.188697482Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" Feb 13 19:29:34.188803 containerd[1509]: time="2025-02-13T19:29:34.188786639Z" level=info msg="TearDown network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" successfully" Feb 13 19:29:34.188803 containerd[1509]: time="2025-02-13T19:29:34.188798151Z" level=info msg="StopPodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" returns successfully" Feb 13 19:29:34.189057 containerd[1509]: time="2025-02-13T19:29:34.189022582Z" level=info msg="RemovePodSandbox for \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" Feb 13 19:29:34.189120 containerd[1509]: time="2025-02-13T19:29:34.189064260Z" level=info msg="Forcibly stopping sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\"" Feb 13 19:29:34.189189 containerd[1509]: time="2025-02-13T19:29:34.189147095Z" level=info msg="TearDown network for sandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" successfully" Feb 13 19:29:34.193107 containerd[1509]: time="2025-02-13T19:29:34.193073691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.193164 containerd[1509]: time="2025-02-13T19:29:34.193120539Z" level=info msg="RemovePodSandbox \"7ffbc3e38e15b704bc39c47b90d3c589d355a9b724dfa1c4abbdc7847e2422fe\" returns successfully" Feb 13 19:29:34.193348 containerd[1509]: time="2025-02-13T19:29:34.193321185Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\"" Feb 13 19:29:34.193426 containerd[1509]: time="2025-02-13T19:29:34.193397529Z" level=info msg="TearDown network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" successfully" Feb 13 19:29:34.193457 containerd[1509]: time="2025-02-13T19:29:34.193423898Z" level=info msg="StopPodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" returns successfully" Feb 13 19:29:34.193652 containerd[1509]: time="2025-02-13T19:29:34.193626969Z" level=info msg="RemovePodSandbox for \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\"" Feb 13 19:29:34.193652 containerd[1509]: time="2025-02-13T19:29:34.193644361Z" level=info msg="Forcibly stopping sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\"" Feb 13 19:29:34.193722 containerd[1509]: time="2025-02-13T19:29:34.193700056Z" level=info msg="TearDown network for sandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" successfully" Feb 13 19:29:34.197224 containerd[1509]: time="2025-02-13T19:29:34.197191156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.197274 containerd[1509]: time="2025-02-13T19:29:34.197231141Z" level=info msg="RemovePodSandbox \"f53ba0cc5edf66bc1771aef78af6c7048ba198ecc09a7049783c39f9a8f3b81d\" returns successfully" Feb 13 19:29:34.197481 containerd[1509]: time="2025-02-13T19:29:34.197452686Z" level=info msg="StopPodSandbox for \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\"" Feb 13 19:29:34.197546 containerd[1509]: time="2025-02-13T19:29:34.197529210Z" level=info msg="TearDown network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" successfully" Feb 13 19:29:34.197546 containerd[1509]: time="2025-02-13T19:29:34.197541473Z" level=info msg="StopPodSandbox for \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" returns successfully" Feb 13 19:29:34.197806 containerd[1509]: time="2025-02-13T19:29:34.197777275Z" level=info msg="RemovePodSandbox for \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\"" Feb 13 19:29:34.197806 containerd[1509]: time="2025-02-13T19:29:34.197798985Z" level=info msg="Forcibly stopping sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\"" Feb 13 19:29:34.197972 containerd[1509]: time="2025-02-13T19:29:34.197863156Z" level=info msg="TearDown network for sandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" successfully" Feb 13 19:29:34.201512 containerd[1509]: time="2025-02-13T19:29:34.201484930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:29:34.201563 containerd[1509]: time="2025-02-13T19:29:34.201518463Z" level=info msg="RemovePodSandbox \"ce940e7374472f2488c9aee2e7a99b39fe13593214275df52c11f97ced463769\" returns successfully" Feb 13 19:29:34.201743 containerd[1509]: time="2025-02-13T19:29:34.201721624Z" level=info msg="StopPodSandbox for \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\"" Feb 13 19:29:34.201863 containerd[1509]: time="2025-02-13T19:29:34.201809770Z" level=info msg="TearDown network for sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\" successfully" Feb 13 19:29:34.201863 containerd[1509]: time="2025-02-13T19:29:34.201835658Z" level=info msg="StopPodSandbox for \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\" returns successfully" Feb 13 19:29:34.202040 containerd[1509]: time="2025-02-13T19:29:34.202023230Z" level=info msg="RemovePodSandbox for \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\"" Feb 13 19:29:34.202082 containerd[1509]: time="2025-02-13T19:29:34.202042125Z" level=info msg="Forcibly stopping sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\"" Feb 13 19:29:34.202136 containerd[1509]: time="2025-02-13T19:29:34.202110564Z" level=info msg="TearDown network for sandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\" successfully" Feb 13 19:29:34.205681 containerd[1509]: time="2025-02-13T19:29:34.205655594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:29:34.205715 containerd[1509]: time="2025-02-13T19:29:34.205685861Z" level=info msg="RemovePodSandbox \"41ecdb55afd7079da54b6de5d7b9d380b6256f5bbbb61def291a488275d63764\" returns successfully" Feb 13 19:29:34.379670 sshd[5928]: Connection closed by 10.0.0.1 port 54722 Feb 13 19:29:34.383663 sshd-session[5921]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:34.397323 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:54722.service: Deactivated successfully. Feb 13 19:29:34.399396 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:29:34.400347 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:29:34.411117 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730). Feb 13 19:29:34.411781 systemd-logind[1491]: Removed session 20. Feb 13 19:29:34.445867 sshd[5939]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:34.447288 sshd-session[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:34.451249 systemd-logind[1491]: New session 21 of user core. Feb 13 19:29:34.457879 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:29:34.571173 sshd[5942]: Connection closed by 10.0.0.1 port 54730 Feb 13 19:29:34.571535 sshd-session[5939]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:34.575600 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:54730.service: Deactivated successfully. Feb 13 19:29:34.577687 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:29:34.578337 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:29:34.579420 systemd-logind[1491]: Removed session 21. Feb 13 19:29:37.360536 kubelet[2633]: E0213 19:29:37.360491 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:29:39.588030 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:46956.service - OpenSSH per-connection server daemon (10.0.0.1:46956). Feb 13 19:29:39.634592 sshd[5987]: Accepted publickey for core from 10.0.0.1 port 46956 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:39.636089 sshd-session[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:39.640008 systemd-logind[1491]: New session 22 of user core. Feb 13 19:29:39.650879 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:29:39.767091 sshd[5989]: Connection closed by 10.0.0.1 port 46956 Feb 13 19:29:39.767456 sshd-session[5987]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:39.770828 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:46956.service: Deactivated successfully. Feb 13 19:29:39.772681 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:29:39.773342 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:29:39.774273 systemd-logind[1491]: Removed session 22. Feb 13 19:29:44.028139 kubelet[2633]: I0213 19:29:44.028088 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:29:44.780901 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:46962.service - OpenSSH per-connection server daemon (10.0.0.1:46962). 
Feb 13 19:29:44.821133 sshd[6008]: Accepted publickey for core from 10.0.0.1 port 46962 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:44.822517 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:44.826509 systemd-logind[1491]: New session 23 of user core. Feb 13 19:29:44.834871 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:29:44.955072 sshd[6010]: Connection closed by 10.0.0.1 port 46962 Feb 13 19:29:44.955512 sshd-session[6008]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:44.960050 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:46962.service: Deactivated successfully. Feb 13 19:29:44.962254 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:29:44.962935 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:29:44.963857 systemd-logind[1491]: Removed session 23. Feb 13 19:29:49.978217 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:59516.service - OpenSSH per-connection server daemon (10.0.0.1:59516). Feb 13 19:29:50.026219 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 59516 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:50.027933 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:50.032266 systemd-logind[1491]: New session 24 of user core. Feb 13 19:29:50.042919 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:29:50.162167 sshd[6045]: Connection closed by 10.0.0.1 port 59516 Feb 13 19:29:50.162615 sshd-session[6043]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:50.166903 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:59516.service: Deactivated successfully. Feb 13 19:29:50.169035 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:29:50.169713 systemd-logind[1491]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:29:50.170595 systemd-logind[1491]: Removed session 24. Feb 13 19:29:55.179804 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:59522.service - OpenSSH per-connection server daemon (10.0.0.1:59522). Feb 13 19:29:55.218673 sshd[6059]: Accepted publickey for core from 10.0.0.1 port 59522 ssh2: RSA SHA256:0O8mujegci5MXpzc9r+9o1el1lByj+eEPDFYqhpZCLY Feb 13 19:29:55.220196 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:55.224216 systemd-logind[1491]: New session 25 of user core. Feb 13 19:29:55.233886 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:29:55.343400 sshd[6061]: Connection closed by 10.0.0.1 port 59522 Feb 13 19:29:55.343830 sshd-session[6059]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:55.347362 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:59522.service: Deactivated successfully. Feb 13 19:29:55.349273 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:29:55.350044 systemd-logind[1491]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:29:55.350967 systemd-logind[1491]: Removed session 25.