Feb 13 19:33:26.874016 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:33:26.874042 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:33:26.874056 kernel: BIOS-provided physical RAM map:
Feb 13 19:33:26.874065 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:33:26.874074 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:33:26.874083 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:33:26.874093 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:33:26.874103 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:33:26.874111 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:33:26.874123 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:33:26.874131 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:33:26.874139 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:33:26.874147 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:33:26.874155 kernel: NX (Execute Disable) protection: active
Feb 13 19:33:26.874165 kernel: APIC: Static calls initialized
Feb 13 19:33:26.874176 kernel: SMBIOS 2.8 present.
Feb 13 19:33:26.874185 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:33:26.874194 kernel: Hypervisor detected: KVM
Feb 13 19:33:26.874203 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:33:26.874211 kernel: kvm-clock: using sched offset of 2292694753 cycles
Feb 13 19:33:26.874220 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:33:26.874229 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:33:26.874239 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:33:26.874248 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:33:26.874257 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:33:26.874299 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:33:26.874309 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:33:26.874318 kernel: Using GB pages for direct mapping
Feb 13 19:33:26.874327 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:33:26.874336 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:33:26.874345 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:26.874355 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:26.874365 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:26.874377 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:33:26.874387 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:26.874397 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:26.874406 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:26.874416 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:26.874426 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:33:26.874436 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:33:26.874451 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:33:26.874462 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:33:26.874471 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:33:26.874481 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:33:26.874490 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:33:26.874499 kernel: No NUMA configuration found
Feb 13 19:33:26.874508 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:33:26.874518 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:33:26.874529 kernel: Zone ranges:
Feb 13 19:33:26.874538 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:33:26.874547 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:33:26.874556 kernel: Normal empty
Feb 13 19:33:26.874566 kernel: Movable zone start for each node
Feb 13 19:33:26.874575 kernel: Early memory node ranges
Feb 13 19:33:26.874595 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:33:26.874605 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:33:26.874614 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:33:26.874627 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:33:26.874636 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:33:26.874645 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:33:26.874655 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:33:26.874664 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:33:26.874673 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:33:26.874683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:33:26.874692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:33:26.874701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:33:26.874714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:33:26.874724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:33:26.874733 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:33:26.874740 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:33:26.874747 kernel: TSC deadline timer available
Feb 13 19:33:26.874754 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:33:26.874761 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:33:26.874768 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:33:26.874775 kernel: kvm-guest: setup PV sched yield
Feb 13 19:33:26.874785 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:33:26.874792 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:33:26.874800 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:33:26.874807 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:33:26.874814 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:33:26.874821 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:33:26.874828 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:33:26.874835 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:33:26.874842 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:33:26.874850 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:33:26.874860 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:33:26.874867 kernel: random: crng init done
Feb 13 19:33:26.874874 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:33:26.874882 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:33:26.874889 kernel: Fallback order for Node 0: 0
Feb 13 19:33:26.874896 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:33:26.874903 kernel: Policy zone: DMA32
Feb 13 19:33:26.874910 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:33:26.874920 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved)
Feb 13 19:33:26.874927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:33:26.874934 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:33:26.874941 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:33:26.874948 kernel: Dynamic Preempt: voluntary
Feb 13 19:33:26.874955 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:33:26.874970 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:33:26.874980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:33:26.874989 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:33:26.874999 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:33:26.875006 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:33:26.875013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:33:26.875020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:33:26.875028 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:33:26.875035 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:33:26.875042 kernel: Console: colour VGA+ 80x25
Feb 13 19:33:26.875049 kernel: printk: console [ttyS0] enabled
Feb 13 19:33:26.875056 kernel: ACPI: Core revision 20230628
Feb 13 19:33:26.875066 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:33:26.875073 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:33:26.875080 kernel: x2apic enabled
Feb 13 19:33:26.875087 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:33:26.875094 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:33:26.875101 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:33:26.875109 kernel: kvm-guest: setup PV IPIs
Feb 13 19:33:26.875125 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:33:26.875133 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:33:26.875140 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:33:26.875147 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:33:26.875155 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:33:26.875164 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:33:26.875172 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:33:26.875179 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:33:26.875187 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:33:26.875194 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:33:26.875204 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:33:26.875212 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:33:26.875219 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:33:26.875227 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:33:26.875234 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:33:26.875242 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:33:26.875250 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:33:26.875257 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:33:26.875287 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:33:26.875294 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:33:26.875302 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:33:26.875309 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:33:26.875317 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:33:26.875324 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:33:26.875332 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:33:26.875339 kernel: landlock: Up and running.
Feb 13 19:33:26.875347 kernel: SELinux: Initializing.
Feb 13 19:33:26.875357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:33:26.875365 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:33:26.875372 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:33:26.875380 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:26.875388 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:26.875395 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:26.875403 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:33:26.875410 kernel: ... version:                0
Feb 13 19:33:26.875418 kernel: ... bit width:              48
Feb 13 19:33:26.875428 kernel: ... generic registers:      6
Feb 13 19:33:26.875435 kernel: ... value mask:             0000ffffffffffff
Feb 13 19:33:26.875442 kernel: ... max period:             00007fffffffffff
Feb 13 19:33:26.875450 kernel: ... fixed-purpose events:   0
Feb 13 19:33:26.875457 kernel: ... event mask:             000000000000003f
Feb 13 19:33:26.875464 kernel: signal: max sigframe size: 1776
Feb 13 19:33:26.875472 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:33:26.875479 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:33:26.875487 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:33:26.875497 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:33:26.875504 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:33:26.875511 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:33:26.875519 kernel: smpboot: Max logical packages: 1
Feb 13 19:33:26.875526 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:33:26.875533 kernel: devtmpfs: initialized
Feb 13 19:33:26.875541 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:33:26.875548 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:33:26.875556 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:33:26.875566 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:33:26.875573 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:33:26.875589 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:33:26.875597 kernel: audit: type=2000 audit(1739475207.256:1): state=initialized audit_enabled=0 res=1
Feb 13 19:33:26.875605 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:33:26.875612 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:33:26.875620 kernel: cpuidle: using governor menu
Feb 13 19:33:26.875627 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:33:26.875634 kernel: dca service started, version 1.12.1
Feb 13 19:33:26.875644 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:33:26.875652 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:33:26.875660 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:33:26.875667 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:33:26.875675 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:33:26.875682 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:33:26.875690 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:33:26.875697 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:33:26.875705 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:33:26.875714 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:33:26.875722 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:33:26.875730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:33:26.875737 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:33:26.875744 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:33:26.875752 kernel: ACPI: Interpreter enabled
Feb 13 19:33:26.875759 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:33:26.875766 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:33:26.875774 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:33:26.875784 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:33:26.875791 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:33:26.875799 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:33:26.875975 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:33:26.876111 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:33:26.876231 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:33:26.876241 kernel: PCI host bridge to bus 0000:00
Feb 13 19:33:26.876401 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:33:26.876541 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:33:26.876711 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:33:26.876934 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:33:26.877058 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:33:26.877175 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:33:26.877301 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:33:26.877445 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:33:26.877575 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:33:26.877710 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:33:26.877828 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:33:26.877945 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:33:26.878072 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:33:26.878202 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:33:26.878355 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:33:26.878566 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:33:26.878736 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:33:26.878908 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:33:26.879044 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:33:26.879167 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:33:26.879321 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:33:26.879453 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:33:26.879573 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:33:26.879703 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:33:26.879830 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:33:26.879963 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:33:26.880107 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:33:26.880301 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:33:26.880445 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:33:26.880566 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:33:26.880695 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:33:26.880827 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:33:26.880947 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:33:26.880957 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:33:26.880973 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:33:26.880983 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:33:26.880993 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:33:26.881002 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:33:26.881009 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:33:26.881017 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:33:26.881025 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:33:26.881032 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:33:26.881040 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:33:26.881050 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:33:26.881057 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:33:26.881065 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:33:26.881072 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:33:26.881080 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:33:26.881088 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:33:26.881095 kernel: iommu: Default domain type: Translated
Feb 13 19:33:26.881103 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:33:26.881110 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:33:26.881120 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:33:26.881127 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:33:26.881135 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:33:26.881322 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:33:26.881466 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:33:26.881599 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:33:26.881610 kernel: vgaarb: loaded
Feb 13 19:33:26.881618 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:33:26.881630 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:33:26.881637 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:33:26.881645 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:33:26.881652 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:33:26.881660 kernel: pnp: PnP ACPI init
Feb 13 19:33:26.881789 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:33:26.881800 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:33:26.881808 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:33:26.881819 kernel: NET: Registered PF_INET protocol family
Feb 13 19:33:26.881827 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:33:26.881835 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:33:26.881842 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:33:26.881850 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:33:26.881858 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:33:26.881865 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:33:26.881873 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:33:26.881881 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:33:26.881891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:33:26.881898 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:33:26.882018 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:33:26.882130 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:33:26.882238 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:33:26.882363 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:33:26.882473 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:33:26.882589 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:33:26.882603 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:33:26.882611 kernel: Initialise system trusted keyrings
Feb 13 19:33:26.882620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:33:26.882627 kernel: Key type asymmetric registered
Feb 13 19:33:26.882635 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:33:26.882642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:33:26.882650 kernel: io scheduler mq-deadline registered
Feb 13 19:33:26.882657 kernel: io scheduler kyber registered
Feb 13 19:33:26.882665 kernel: io scheduler bfq registered
Feb 13 19:33:26.882675 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:33:26.882683 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:33:26.882691 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:33:26.882698 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:33:26.882706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:33:26.882714 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:33:26.882722 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:33:26.882729 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:33:26.882737 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:33:26.882747 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:33:26.882871 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:33:26.882991 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:33:26.883109 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:33:26 UTC (1739475206)
Feb 13 19:33:26.883221 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:33:26.883230 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:33:26.883238 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:33:26.883245 kernel: Segment Routing with IPv6
Feb 13 19:33:26.883257 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:33:26.883264 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:33:26.883296 kernel: Key type dns_resolver registered
Feb 13 19:33:26.883303 kernel: IPI shorthand broadcast: enabled
Feb 13 19:33:26.883311 kernel: sched_clock: Marking stable (538005596, 112615755)->(693146259, -42524908)
Feb 13 19:33:26.883319 kernel: registered taskstats version 1
Feb 13 19:33:26.883326 kernel: Loading compiled-in X.509 certificates
Feb 13 19:33:26.883334 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:33:26.883342 kernel: Key type .fscrypt registered
Feb 13 19:33:26.883352 kernel: Key type fscrypt-provisioning registered
Feb 13 19:33:26.883360 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:33:26.883367 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:33:26.883375 kernel: ima: No architecture policies found
Feb 13 19:33:26.883382 kernel: clk: Disabling unused clocks
Feb 13 19:33:26.883390 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 19:33:26.883397 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:33:26.883405 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 19:33:26.883412 kernel: Run /init as init process
Feb 13 19:33:26.883422 kernel:   with arguments:
Feb 13 19:33:26.883429 kernel:     /init
Feb 13 19:33:26.883437 kernel:   with environment:
Feb 13 19:33:26.883444 kernel:     HOME=/
Feb 13 19:33:26.883452 kernel:     TERM=linux
Feb 13 19:33:26.883459 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:33:26.883469 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:33:26.883478 systemd[1]: Detected virtualization kvm.
Feb 13 19:33:26.883489 systemd[1]: Detected architecture x86-64.
Feb 13 19:33:26.883497 systemd[1]: Running in initrd.
Feb 13 19:33:26.883505 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:33:26.883513 systemd[1]: Hostname set to <localhost>.
Feb 13 19:33:26.883521 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:33:26.883529 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:33:26.883537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:26.883545 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:26.883557 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:33:26.883576 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:33:26.883594 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:33:26.883603 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:33:26.883613 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:33:26.883624 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:33:26.883633 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:26.883641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:33:26.883649 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:33:26.883657 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:33:26.883666 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:33:26.883674 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:33:26.883682 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:33:26.883692 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:33:26.883701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:33:26.883709 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:33:26.883717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:26.883726 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:26.883734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:26.883742 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:33:26.883751 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:33:26.883759 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:33:26.883769 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:33:26.883778 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:33:26.883786 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:33:26.883797 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:33:26.883805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:26.883814 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:33:26.883822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:33:26.883831 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:33:26.883862 systemd-journald[191]: Collecting audit messages is disabled. Feb 13 19:33:26.883883 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:33:26.883894 systemd-journald[191]: Journal started Feb 13 19:33:26.883914 systemd-journald[191]: Runtime Journal (/run/log/journal/20cddeadac454e30a983f5679eb08455) is 6.0M, max 48.4M, 42.3M free. Feb 13 19:33:26.878939 systemd-modules-load[193]: Inserted module 'overlay' Feb 13 19:33:26.911636 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:33:26.911660 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:33:26.911674 kernel: Bridge firewalling registered Feb 13 19:33:26.905498 systemd-modules-load[193]: Inserted module 'br_netfilter' Feb 13 19:33:26.913412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Feb 13 19:33:26.915059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:26.922490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:33:26.925647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:33:26.928159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:33:26.930894 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:33:26.937111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:33:26.941813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:33:26.943248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:33:26.945961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:33:26.948337 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:26.949698 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:33:26.961314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:33:26.964985 dracut-cmdline[228]: dracut-dracut-053 Feb 13 19:33:26.968314 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:33:26.979393 systemd-resolved[227]: Positive Trust Anchors: Feb 13 19:33:26.979404 systemd-resolved[227]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:33:26.979436 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:33:26.981945 systemd-resolved[227]: Defaulting to hostname 'linux'. Feb 13 19:33:26.982947 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:33:26.989039 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:33:27.049299 kernel: SCSI subsystem initialized Feb 13 19:33:27.058297 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:33:27.069314 kernel: iscsi: registered transport (tcp) Feb 13 19:33:27.089298 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:33:27.089320 kernel: QLogic iSCSI HBA Driver Feb 13 19:33:27.138217 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:33:27.146558 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:33:27.175968 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 19:33:27.176001 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:33:27.177361 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:33:27.218309 kernel: raid6: avx2x4 gen() 29677 MB/s Feb 13 19:33:27.235292 kernel: raid6: avx2x2 gen() 30465 MB/s Feb 13 19:33:27.252373 kernel: raid6: avx2x1 gen() 25831 MB/s Feb 13 19:33:27.252400 kernel: raid6: using algorithm avx2x2 gen() 30465 MB/s Feb 13 19:33:27.270380 kernel: raid6: .... xor() 19906 MB/s, rmw enabled Feb 13 19:33:27.270402 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:33:27.290307 kernel: xor: automatically using best checksumming function avx Feb 13 19:33:27.440311 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:33:27.453907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:33:27.467526 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:33:27.481406 systemd-udevd[413]: Using default interface naming scheme 'v255'. Feb 13 19:33:27.486060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:33:27.494422 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:33:27.510022 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Feb 13 19:33:27.544893 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:33:27.554401 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:33:27.623553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:33:27.632515 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:33:27.648211 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:33:27.649901 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 19:33:27.653384 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:33:27.654693 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:33:27.666365 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:33:27.703721 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:33:27.703743 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:33:27.703930 kernel: libata version 3.00 loaded. Feb 13 19:33:27.703950 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:33:27.703972 kernel: AES CTR mode by8 optimization enabled Feb 13 19:33:27.703987 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:33:27.704001 kernel: GPT:9289727 != 19775487 Feb 13 19:33:27.704014 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:33:27.704027 kernel: GPT:9289727 != 19775487 Feb 13 19:33:27.704040 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:33:27.718982 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 19:33:27.719001 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:33:27.719015 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:27.719032 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:33:27.719181 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:33:27.719335 kernel: scsi host0: ahci Feb 13 19:33:27.719486 kernel: scsi host1: ahci Feb 13 19:33:27.719643 kernel: scsi host2: ahci Feb 13 19:33:27.719798 kernel: scsi host3: ahci Feb 13 19:33:27.719945 kernel: scsi host4: ahci Feb 13 19:33:27.720119 kernel: scsi host5: ahci Feb 13 19:33:27.721101 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 19:33:27.721116 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 19:33:27.721127 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 19:33:27.721162 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 19:33:27.721173 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 19:33:27.721183 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 19:33:27.671453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:33:27.764499 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (464) Feb 13 19:33:27.764517 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (472) Feb 13 19:33:27.682760 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:33:27.686169 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:33:27.686384 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:27.690037 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 19:33:27.691195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:33:27.691651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:27.692880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:27.699461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:27.734055 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:33:27.771478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:27.791412 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:33:27.797535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:33:27.800066 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:33:27.807820 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:33:27.823399 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:33:27.830086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:33:27.847541 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:28.000356 disk-uuid[553]: Primary Header is updated. Feb 13 19:33:28.000356 disk-uuid[553]: Secondary Entries is updated. Feb 13 19:33:28.000356 disk-uuid[553]: Secondary Header is updated. 
Feb 13 19:33:28.004369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:28.030418 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:28.030451 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:28.033465 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:28.033491 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:28.033505 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:28.034538 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:33:28.035898 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:33:28.035929 kernel: ata3.00: applying bridge limits Feb 13 19:33:28.036647 kernel: ata3.00: configured for UDMA/100 Feb 13 19:33:28.038387 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:33:28.084864 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:33:28.097965 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:33:28.097982 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:33:29.041851 disk-uuid[563]: The operation has completed successfully. Feb 13 19:33:29.043263 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:29.071121 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:33:29.071247 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:33:29.093431 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:33:29.099472 sh[591]: Success Feb 13 19:33:29.112299 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:33:29.146506 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:33:29.160764 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:33:29.164049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:33:29.179322 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 19:33:29.179351 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:29.179363 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:33:29.181703 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:33:29.181715 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:33:29.186168 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:33:29.188561 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:33:29.203498 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:33:29.206625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:33:29.214345 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:33:29.214367 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:29.214377 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:29.217660 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:29.226796 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:33:29.247288 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:33:29.312337 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:33:29.327413 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 19:33:29.349104 systemd-networkd[769]: lo: Link UP Feb 13 19:33:29.349115 systemd-networkd[769]: lo: Gained carrier Feb 13 19:33:29.350982 systemd-networkd[769]: Enumeration completed Feb 13 19:33:29.351589 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:33:29.351593 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:33:29.352464 systemd-networkd[769]: eth0: Link UP Feb 13 19:33:29.352469 systemd-networkd[769]: eth0: Gained carrier Feb 13 19:33:29.352477 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:33:29.354465 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:33:29.361162 systemd[1]: Reached target network.target - Network. Feb 13 19:33:29.367720 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:33:29.371371 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:33:29.378600 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:33:29.432287 ignition[774]: Ignition 2.20.0 Feb 13 19:33:29.432864 ignition[774]: Stage: fetch-offline Feb 13 19:33:29.432910 ignition[774]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:29.432920 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:29.433015 ignition[774]: parsed url from cmdline: "" Feb 13 19:33:29.433019 ignition[774]: no config URL provided Feb 13 19:33:29.433024 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:33:29.433034 ignition[774]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:33:29.433065 ignition[774]: op(1): [started] loading QEMU firmware config module Feb 13 19:33:29.433070 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:33:29.440661 ignition[774]: op(1): [finished] loading QEMU firmware config module Feb 13 19:33:29.440679 ignition[774]: QEMU firmware config was not found. Ignoring... Feb 13 19:33:29.480718 ignition[774]: parsing config with SHA512: c7bc807e25f20cb088283f4a19be763f0053feefa30a9ff6cbf7f99bcaf696d3962f35dca329b83de828e0d0cecede43b4b6b92231104f70fc29295f6a1e6378 Feb 13 19:33:29.484483 unknown[774]: fetched base config from "system" Feb 13 19:33:29.484495 unknown[774]: fetched user config from "qemu" Feb 13 19:33:29.485743 systemd-resolved[227]: Detected conflict on linux IN A 10.0.0.18 Feb 13 19:33:29.485892 ignition[774]: fetch-offline: fetch-offline passed Feb 13 19:33:29.485753 systemd-resolved[227]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Feb 13 19:33:29.485982 ignition[774]: Ignition finished successfully Feb 13 19:33:29.490670 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:33:29.493581 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:33:29.511408 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 19:33:29.525289 ignition[784]: Ignition 2.20.0 Feb 13 19:33:29.525760 ignition[784]: Stage: kargs Feb 13 19:33:29.525945 ignition[784]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:29.525955 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:29.526795 ignition[784]: kargs: kargs passed Feb 13 19:33:29.526837 ignition[784]: Ignition finished successfully Feb 13 19:33:29.530156 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:33:29.540441 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:33:29.552197 ignition[793]: Ignition 2.20.0 Feb 13 19:33:29.552208 ignition[793]: Stage: disks Feb 13 19:33:29.552400 ignition[793]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:29.552412 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:29.553228 ignition[793]: disks: disks passed Feb 13 19:33:29.553286 ignition[793]: Ignition finished successfully Feb 13 19:33:29.558845 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:33:29.560187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:33:29.562118 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:33:29.563351 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:33:29.564379 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:33:29.566450 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:33:29.578520 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:33:29.592023 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:33:29.703050 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:33:29.714401 systemd[1]: Mounting sysroot.mount - /sysroot... 
Feb 13 19:33:29.797289 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 19:33:29.797694 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:33:29.799900 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:33:29.809384 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:33:29.811240 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:33:29.813479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:33:29.813534 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:33:29.819902 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) Feb 13 19:33:29.813559 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:33:29.821503 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:33:29.825841 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:33:29.825854 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:29.825865 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:29.825875 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:29.831423 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:33:29.833325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:33:29.867158 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:33:29.871967 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:33:29.876422 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:33:29.880935 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:33:29.958286 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:33:29.968339 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:33:29.969395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:33:29.979289 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:33:29.994414 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:33:30.005049 ignition[926]: INFO : Ignition 2.20.0 Feb 13 19:33:30.005049 ignition[926]: INFO : Stage: mount Feb 13 19:33:30.006789 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:30.006789 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:30.006789 ignition[926]: INFO : mount: mount passed Feb 13 19:33:30.006789 ignition[926]: INFO : Ignition finished successfully Feb 13 19:33:30.012052 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:33:30.022385 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:33:30.178761 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:33:30.192520 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:33:30.200814 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939) Feb 13 19:33:30.200865 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:33:30.200890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:30.202321 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:30.205388 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:30.206839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:33:30.228126 ignition[956]: INFO : Ignition 2.20.0 Feb 13 19:33:30.228126 ignition[956]: INFO : Stage: files Feb 13 19:33:30.230218 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:30.230218 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:30.230218 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:33:30.234185 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:33:30.234185 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:33:30.234185 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:33:30.234185 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:33:30.234185 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:33:30.233327 unknown[956]: wrote ssh authorized keys file for user: core Feb 13 19:33:30.242831 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:33:30.242831 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:33:30.242831 ignition[956]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:33:30.242831 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 19:33:30.277623 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:33:30.425000 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:33:30.425000 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:33:30.429841 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:33:30.429841 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:33:30.433907 ignition[956]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:33:30.433907 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:33:30.916991 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:33:31.160644 systemd-networkd[769]: eth0: Gained IPv6LL Feb 13 19:33:31.308378 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:33:31.308378 ignition[956]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(e): [started] processing 
unit "prepare-helm.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Feb 13 19:33:31.312320 ignition[956]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:33:31.340642 ignition[956]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:33:31.346330 ignition[956]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:33:31.347891 ignition[956]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:33:31.347891 ignition[956]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:33:31.347891 ignition[956]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:33:31.347891 ignition[956]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:33:31.347891 
ignition[956]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:33:31.347891 ignition[956]: INFO : files: files passed
Feb 13 19:33:31.347891 ignition[956]: INFO : Ignition finished successfully
Feb 13 19:33:31.349572 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:33:31.363521 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:33:31.366523 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:33:31.368292 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:33:31.368412 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:33:31.376562 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:33:31.379428 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:33:31.379428 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:33:31.382729 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:33:31.386820 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:33:31.388297 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:33:31.397425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:33:31.423962 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:33:31.424109 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:33:31.424997 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:33:31.427996 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:33:31.428565 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:33:31.438469 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:33:31.453376 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:33:31.465420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:33:31.476782 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:33:31.477346 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:33:31.477829 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:33:31.478130 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:33:31.478241 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:33:31.483619 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:33:31.483997 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:33:31.484363 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:33:31.490147 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:33:31.490815 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:33:31.491163 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:33:31.491655 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:33:31.492040 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:33:31.492395 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:33:31.492862 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:33:31.493174 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:33:31.493304 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:33:31.494165 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:33:31.494696 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:31.494978 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:33:31.495087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:31.511693 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:33:31.511870 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:33:31.514173 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:33:31.514294 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:33:31.514759 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:33:31.515010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:33:31.523375 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:31.526317 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:33:31.526890 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:33:31.528700 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:33:31.528831 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:33:31.530703 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:33:31.530818 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:33:31.532331 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:33:31.532502 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:33:31.534242 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:33:31.534401 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:33:31.546458 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:33:31.547481 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:33:31.547644 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:33:31.551516 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:33:31.553699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:33:31.553983 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:33:31.556308 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:33:31.561416 ignition[1011]: INFO : Ignition 2.20.0
Feb 13 19:33:31.561416 ignition[1011]: INFO : Stage: umount
Feb 13 19:33:31.561416 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:33:31.561416 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:33:31.556596 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:33:31.570966 ignition[1011]: INFO : umount: umount passed
Feb 13 19:33:31.570966 ignition[1011]: INFO : Ignition finished successfully
Feb 13 19:33:31.562980 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:33:31.563118 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:33:31.564861 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:33:31.564989 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:33:31.567802 systemd[1]: Stopped target network.target - Network.
Feb 13 19:33:31.568832 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:33:31.568901 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:33:31.570971 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:33:31.571031 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:33:31.572876 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:33:31.572937 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:33:31.574924 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:33:31.574986 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:33:31.577347 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:33:31.579791 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:33:31.582353 systemd-networkd[769]: eth0: DHCPv6 lease lost
Feb 13 19:33:31.583092 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:33:31.584690 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:33:31.584813 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:33:31.586395 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:33:31.586439 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:31.596385 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:33:31.597870 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:33:31.597931 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:33:31.600080 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:33:31.602972 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:33:31.603092 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:33:31.607265 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:33:31.607399 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:33:31.608849 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:33:31.608897 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:33:31.610199 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:33:31.610245 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:33:31.614943 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:33:31.615064 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:33:31.620955 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:33:31.621130 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:33:31.623139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:33:31.623186 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:31.625004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:33:31.625041 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:31.627196 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:33:31.627243 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:33:31.629351 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:33:31.629398 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:33:31.631229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:33:31.631326 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:33:31.643452 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:33:31.645543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:33:31.645607 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:33:31.647889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:33:31.647937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:33:31.652101 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:33:31.652240 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:33:31.727825 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:33:31.727955 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:33:31.728816 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:33:31.730939 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:33:31.730992 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:33:31.741390 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:33:31.749232 systemd[1]: Switching root.
Feb 13 19:33:31.782429 systemd-journald[191]: Journal stopped
Feb 13 19:33:32.884934 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:33:32.885014 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:33:32.885046 kernel: SELinux: policy capability open_perms=1
Feb 13 19:33:32.885068 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:33:32.885084 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:33:32.885101 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:33:32.885121 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:33:32.885137 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:33:32.885152 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:33:32.885167 kernel: audit: type=1403 audit(1739475212.143:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:33:32.885190 systemd[1]: Successfully loaded SELinux policy in 40.324ms.
Feb 13 19:33:32.885224 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.509ms.
Feb 13 19:33:32.885241 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:33:32.885257 systemd[1]: Detected virtualization kvm.
Feb 13 19:33:32.886706 systemd[1]: Detected architecture x86-64.
Feb 13 19:33:32.886744 systemd[1]: Detected first boot.
Feb 13 19:33:32.886758 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:33:32.886771 zram_generator::config[1077]: No configuration found.
Feb 13 19:33:32.886786 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:33:32.886798 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:33:32.886810 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:33:32.886824 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:33:32.886839 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:33:32.886853 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:33:32.886865 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:33:32.886878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:33:32.886890 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:33:32.886902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:33:32.886914 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:33:32.886926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:32.886938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:32.886952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:33:32.886967 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:33:32.886980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:33:32.886993 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:33:32.887006 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:33:32.887018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:32.887030 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:33:32.887122 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:33:32.887135 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:33:32.887147 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:33:32.887161 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:33:32.887173 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:33:32.887187 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:33:32.887199 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:33:32.887214 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:33:32.887226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:32.887238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:32.887250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:32.887265 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:33:32.887290 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:33:32.887303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:33:32.887315 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:33:32.887327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:32.887339 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:33:32.887351 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:33:32.887364 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:33:32.887376 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:33:32.887391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:32.887403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:33:32.887423 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:33:32.887435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:32.887447 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:33:32.887460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:32.887476 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:33:32.887488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:32.887504 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:33:32.887516 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 19:33:32.887529 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 19:33:32.887542 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:33:32.887554 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:33:32.887566 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:33:32.887578 kernel: fuse: init (API version 7.39)
Feb 13 19:33:32.887592 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:33:32.887606 kernel: loop: module loaded
Feb 13 19:33:32.887652 systemd-journald[1151]: Collecting audit messages is disabled.
Feb 13 19:33:32.887682 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:33:32.887695 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:32.887708 systemd-journald[1151]: Journal started
Feb 13 19:33:32.887731 systemd-journald[1151]: Runtime Journal (/run/log/journal/20cddeadac454e30a983f5679eb08455) is 6.0M, max 48.4M, 42.3M free.
Feb 13 19:33:32.892588 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:33:32.893934 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:33:32.895436 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:33:32.896639 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:33:32.897885 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:33:32.899158 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:33:32.900450 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:33:32.901933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:33:32.903732 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:33:32.904003 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:33:32.905562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:32.905832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:32.907372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:32.907649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:32.909232 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:33:32.909515 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:33:32.910956 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:32.911216 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:32.912743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:33:32.914502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:33:32.923294 kernel: ACPI: bus type drm_connector registered
Feb 13 19:33:32.924104 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:33:32.925849 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:33:32.926087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:33:32.940371 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:33:32.952397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:33:32.967359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:33:32.977623 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:33:33.022469 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:33:33.026941 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:33:33.028120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:33:33.032436 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:33:33.046286 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:33:33.048334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:33:33.050964 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:33:33.054070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:33:33.058229 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:33:33.059533 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:33:33.068515 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:33:33.086764 systemd-journald[1151]: Time spent on flushing to /var/log/journal/20cddeadac454e30a983f5679eb08455 is 13.617ms for 944 entries.
Feb 13 19:33:33.086764 systemd-journald[1151]: System Journal (/var/log/journal/20cddeadac454e30a983f5679eb08455) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:33:33.276564 systemd-journald[1151]: Received client request to flush runtime journal.
Feb 13 19:33:33.093103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:33:33.112527 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:33:33.114782 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:33:33.140457 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Feb 13 19:33:33.140472 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Feb 13 19:33:33.146896 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:33:33.154518 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:33:33.176906 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:33:33.194647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:33:33.198803 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:33:33.200748 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:33:33.214793 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Feb 13 19:33:33.214807 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Feb 13 19:33:33.219885 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:33:33.278614 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:33:33.749624 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:33:33.763611 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:33:33.787847 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Feb 13 19:33:33.803256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:33:33.817422 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:33:33.828471 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:33:33.836008 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 19:33:33.849296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1242)
Feb 13 19:33:33.898393 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:33:33.894106 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:33:33.903068 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:33:33.928176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:33:33.951359 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:33:33.953872 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:33:33.958415 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:33:33.958602 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:33:33.961483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:33:33.973302 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:33:33.990744 systemd-networkd[1246]: lo: Link UP
Feb 13 19:33:33.990755 systemd-networkd[1246]: lo: Gained carrier
Feb 13 19:33:33.993148 systemd-networkd[1246]: Enumeration completed
Feb 13 19:33:33.993298 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:33:33.998610 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:33:33.998616 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:33:34.000125 systemd-networkd[1246]: eth0: Link UP
Feb 13 19:33:34.000129 systemd-networkd[1246]: eth0: Gained carrier
Feb 13 19:33:34.000211 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:33:34.031149 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:33:34.039359 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:33:34.048304 kernel: kvm_amd: TSC scaling supported
Feb 13 19:33:34.048539 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:33:34.048573 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:33:34.048632 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:33:34.048667 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:33:34.048702 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:33:34.069797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:33:34.070470 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:33:34.109931 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:33:34.123552 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:33:34.131993 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:33:34.167011 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:33:34.168699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:33:34.182501 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:33:34.187659 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:33:34.224642 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:33:34.226345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:33:34.227631 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:33:34.227658 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:33:34.228670 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:33:34.231153 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:33:34.246583 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:33:34.250125 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:33:34.251399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:34.252658 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:33:34.255647 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:33:34.259474 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:33:34.262492 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:33:34.297309 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 19:33:34.300471 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:33:34.324305 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:33:34.348290 kernel: loop1: detected capacity change from 0 to 210664
Feb 13 19:33:34.362249 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:33:34.363099 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:33:34.374291 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 19:33:34.407300 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 19:33:34.417410 kernel: loop4: detected capacity change from 0 to 210664
Feb 13 19:33:34.424283 kernel: loop5: detected capacity change from 0 to 140992
Feb 13 19:33:34.431638 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:33:34.432263 (sd-merge)[1309]: Merged extensions into '/usr'.
Feb 13 19:33:34.436192 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:33:34.436209 systemd[1]: Reloading...
Feb 13 19:33:34.486313 zram_generator::config[1337]: No configuration found.
Feb 13 19:33:34.530632 ldconfig[1294]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:33:34.620290 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:33:34.684808 systemd[1]: Reloading finished in 248 ms.
Feb 13 19:33:34.703190 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:33:34.704767 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:33:34.723402 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:33:34.725620 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:33:34.730772 systemd[1]: Reloading requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:33:34.730786 systemd[1]: Reloading...
Feb 13 19:33:34.748634 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:33:34.748987 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:33:34.749975 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:33:34.750287 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Feb 13 19:33:34.750373 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Feb 13 19:33:34.753672 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:33:34.753683 systemd-tmpfiles[1382]: Skipping /boot
Feb 13 19:33:34.765585 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:33:34.765683 systemd-tmpfiles[1382]: Skipping /boot
Feb 13 19:33:34.781334 zram_generator::config[1410]: No configuration found.
Feb 13 19:33:34.900098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:33:34.966302 systemd[1]: Reloading finished in 235 ms.
Feb 13 19:33:34.984987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:33:35.002807 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:33:35.005796 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:33:35.008837 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:33:35.014198 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:33:35.018372 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:33:35.026920 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:35.027333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:35.030414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:35.037163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:35.044133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:35.045825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:35.046236 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:35.047242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:35.049215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:35.051432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:33:35.053239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:35.053480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:35.056473 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:35.056824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:35.068083 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:33:35.071215 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:33:35.075411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:35.075604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:35.078411 augenrules[1496]: No rules
Feb 13 19:33:35.082430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:35.085306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:33:35.092568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:35.099416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:35.100624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:35.103477 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:33:35.106452 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:33:35.108355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:35.109194 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:33:35.109625 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:33:35.112015 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:33:35.113768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:35.114074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:35.115658 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:33:35.116001 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:33:35.117654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:35.118004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:35.119898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:35.120193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:35.120419 systemd-resolved[1458]: Positive Trust Anchors:
Feb 13 19:33:35.120428 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:33:35.120460 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:33:35.124179 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:33:35.126214 systemd-resolved[1458]: Defaulting to hostname 'linux'.
Feb 13 19:33:35.127388 systemd-networkd[1246]: eth0: Gained IPv6LL
Feb 13 19:33:35.129800 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:33:35.131896 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:33:35.136128 systemd[1]: Reached target network.target - Network.
Feb 13 19:33:35.137116 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:33:35.138225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:33:35.139490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:33:35.139587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:33:35.139627 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:33:35.201838 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:33:35.203574 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:33:36.534772 systemd-resolved[1458]: Clock change detected. Flushing caches.
Feb 13 19:33:36.534829 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:33:36.534881 systemd-timesyncd[1510]: Initial clock synchronization to Thu 2025-02-13 19:33:36.534712 UTC.
Feb 13 19:33:36.535554 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:33:36.537003 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:33:36.538440 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:33:36.539869 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:33:36.539906 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:33:36.540930 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:33:36.542300 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:33:36.543626 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:33:36.545057 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:33:36.547032 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:33:36.550576 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:33:36.553158 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:33:36.560261 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:33:36.561427 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:33:36.562400 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:33:36.563532 systemd[1]: System is tainted: cgroupsv1
Feb 13 19:33:36.563572 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:33:36.563594 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:33:36.565375 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:33:36.567910 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 19:33:36.570636 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:33:36.574898 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:33:36.577963 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:33:36.581160 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:33:36.584654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:33:36.585313 jq[1531]: false
Feb 13 19:33:36.590285 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:33:36.593998 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found loop3
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found loop4
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found loop5
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found sr0
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda1
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda2
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda3
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found usr
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda4
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda6
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda7
Feb 13 19:33:36.595829 extend-filesystems[1533]: Found vda9
Feb 13 19:33:36.595829 extend-filesystems[1533]: Checking size of /dev/vda9
Feb 13 19:33:36.617604 extend-filesystems[1533]: Resized partition /dev/vda9
Feb 13 19:33:36.605932 dbus-daemon[1529]: [system] SELinux support is enabled
Feb 13 19:33:36.599413 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:33:36.604260 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:33:36.611367 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:33:36.619783 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:33:36.621339 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:33:36.622622 extend-filesystems[1553]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:33:36.629084 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1255)
Feb 13 19:33:36.624538 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:33:36.632723 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:33:36.646452 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:33:36.649734 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:33:36.658817 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:33:36.664473 update_engine[1557]: I20250213 19:33:36.664398 1557 main.cc:92] Flatcar Update Engine starting
Feb 13 19:33:36.679862 update_engine[1557]: I20250213 19:33:36.665858 1557 update_check_scheduler.cc:74] Next update check in 2m4s
Feb 13 19:33:36.680142 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:33:36.680610 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:33:36.680610 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:33:36.680610 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:33:36.680456 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:33:36.689167 jq[1567]: true
Feb 13 19:33:36.683513 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:33:36.691763 extend-filesystems[1533]: Resized filesystem in /dev/vda9
Feb 13 19:33:36.683864 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:33:36.686301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:33:36.691829 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:33:36.692142 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:33:36.695206 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:33:36.695506 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:33:36.712953 jq[1578]: true
Feb 13 19:33:36.715906 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:33:36.721504 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:33:36.722054 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 19:33:36.737245 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:33:36.746153 tar[1577]: linux-amd64/helm
Feb 13 19:33:36.759240 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:33:36.760814 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:33:36.760919 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:33:36.760940 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:33:36.762274 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:33:36.762292 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:33:36.764612 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:33:36.770980 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:33:36.774911 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:33:36.775190 systemd-logind[1555]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:33:36.775211 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:33:36.775607 systemd-logind[1555]: New seat seat0.
Feb 13 19:33:36.778938 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:33:36.785879 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:33:36.802700 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:33:36.803194 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:33:36.809842 bash[1621]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:33:36.813964 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:33:36.817869 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:33:36.820554 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:33:36.823691 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:33:36.833841 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:33:36.844144 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:33:36.849042 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:33:36.850586 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:33:36.937876 containerd[1579]: time="2025-02-13T19:33:36.937490314Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:33:36.961875 containerd[1579]: time="2025-02-13T19:33:36.961831093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:36.964081 containerd[1579]: time="2025-02-13T19:33:36.964056496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:36.964136 containerd[1579]: time="2025-02-13T19:33:36.964124383Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:33:36.964206 containerd[1579]: time="2025-02-13T19:33:36.964193162Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:33:36.964416 containerd[1579]: time="2025-02-13T19:33:36.964400972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:33:36.964468 containerd[1579]: time="2025-02-13T19:33:36.964457568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:36.964586 containerd[1579]: time="2025-02-13T19:33:36.964564949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:36.964632 containerd[1579]: time="2025-02-13T19:33:36.964622086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:36.964948 containerd[1579]: time="2025-02-13T19:33:36.964930324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:36.965008 containerd[1579]: time="2025-02-13T19:33:36.964995286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:36.965056 containerd[1579]: time="2025-02-13T19:33:36.965044308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:36.965096 containerd[1579]: time="2025-02-13T19:33:36.965085585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:36.965227 containerd[1579]: time="2025-02-13T19:33:36.965213004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:36.965612 containerd[1579]: time="2025-02-13T19:33:36.965509721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:36.965752 containerd[1579]: time="2025-02-13T19:33:36.965735985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:36.965814 containerd[1579]: time="2025-02-13T19:33:36.965787241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:33:36.965964 containerd[1579]: time="2025-02-13T19:33:36.965949345Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:33:36.966202 containerd[1579]: time="2025-02-13T19:33:36.966057748Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:33:36.972734 containerd[1579]: time="2025-02-13T19:33:36.972560080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:33:36.972734 containerd[1579]: time="2025-02-13T19:33:36.972598993Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:33:36.972734 containerd[1579]: time="2025-02-13T19:33:36.972613260Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:33:36.972734 containerd[1579]: time="2025-02-13T19:33:36.972626835Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:33:36.972734 containerd[1579]: time="2025-02-13T19:33:36.972639629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:33:36.972949 containerd[1579]: time="2025-02-13T19:33:36.972910327Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:33:36.973584 containerd[1579]: time="2025-02-13T19:33:36.973531812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:33:36.973843 containerd[1579]: time="2025-02-13T19:33:36.973820633Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:33:36.973881 containerd[1579]: time="2025-02-13T19:33:36.973848355Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:33:36.973902 containerd[1579]: time="2025-02-13T19:33:36.973887028Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:33:36.973922 containerd[1579]: time="2025-02-13T19:33:36.973908468Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.973941 containerd[1579]: time="2025-02-13T19:33:36.973926011Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.973970 containerd[1579]: time="2025-02-13T19:33:36.973943223Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.973970 containerd[1579]: time="2025-02-13T19:33:36.973962179Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.974006 containerd[1579]: time="2025-02-13T19:33:36.973981164Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.974006 containerd[1579]: time="2025-02-13T19:33:36.973998287Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.974041 containerd[1579]: time="2025-02-13T19:33:36.974014858Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.974041 containerd[1579]: time="2025-02-13T19:33:36.974031990Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:33:36.974076 containerd[1579]: time="2025-02-13T19:33:36.974059501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974101 containerd[1579]: time="2025-02-13T19:33:36.974078828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974101 containerd[1579]: time="2025-02-13T19:33:36.974095840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974146 containerd[1579]: time="2025-02-13T19:33:36.974113232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974146 containerd[1579]: time="2025-02-13T19:33:36.974130635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974188 containerd[1579]: time="2025-02-13T19:33:36.974150662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974188 containerd[1579]: time="2025-02-13T19:33:36.974170209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974224 containerd[1579]: time="2025-02-13T19:33:36.974187051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974224 containerd[1579]: time="2025-02-13T19:33:36.974204944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974260 containerd[1579]: time="2025-02-13T19:33:36.974224721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974260 containerd[1579]: time="2025-02-13T19:33:36.974238447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974260 containerd[1579]: time="2025-02-13T19:33:36.974250229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974316 containerd[1579]: time="2025-02-13T19:33:36.974261951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974316 containerd[1579]: time="2025-02-13T19:33:36.974276248Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:33:36.974316 containerd[1579]: time="2025-02-13T19:33:36.974295774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974316 containerd[1579]: time="2025-02-13T19:33:36.974307857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974389 containerd[1579]: time="2025-02-13T19:33:36.974318417Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:33:36.974389 containerd[1579]: time="2025-02-13T19:33:36.974364694Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:33:36.974389 containerd[1579]: time="2025-02-13T19:33:36.974383008Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:33:36.974458 containerd[1579]: time="2025-02-13T19:33:36.974393628Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:33:36.974458 containerd[1579]: time="2025-02-13T19:33:36.974406191Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:33:36.974458 containerd[1579]: time="2025-02-13T19:33:36.974416090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.974458 containerd[1579]: time="2025-02-13T19:33:36.974429235Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:33:36.974458 containerd[1579]: time="2025-02-13T19:33:36.974439143Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:33:36.974458 containerd[1579]: time="2025-02-13T19:33:36.974450484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:33:36.975865 containerd[1579]: time="2025-02-13T19:33:36.974730199Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:33:36.975865 containerd[1579]: time="2025-02-13T19:33:36.974779331Z" level=info msg="Connect containerd service"
Feb 13 19:33:36.975865 containerd[1579]: time="2025-02-13T19:33:36.974836268Z" level=info msg="using legacy CRI server"
Feb 13 19:33:36.975865 containerd[1579]: time="2025-02-13T19:33:36.974843742Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:33:36.975865 containerd[1579]: time="2025-02-13T19:33:36.974955351Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:33:36.976926 containerd[1579]: time="2025-02-13T19:33:36.976854773Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:33:36.977036 containerd[1579]: time="2025-02-13T19:33:36.977010094Z" level=info msg="Start subscribing containerd event"
Feb 13 19:33:36.977389 containerd[1579]: time="2025-02-13T19:33:36.977106194Z" level=info msg="Start recovering state"
Feb 13 19:33:36.977389 containerd[1579]: time="2025-02-13T19:33:36.977188248Z" level=info msg="Start event monitor"
Feb 13 19:33:36.977389 containerd[1579]: time="2025-02-13T19:33:36.977209277Z" level=info msg="Start snapshots syncer"
Feb 13 19:33:36.977389 containerd[1579]: time="2025-02-13T19:33:36.977218504Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:33:36.977389 containerd[1579]: time="2025-02-13T19:33:36.977229495Z" level=info msg="Start streaming server"
Feb 13 19:33:36.977500 containerd[1579]: time="2025-02-13T19:33:36.977368035Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:33:36.977500 containerd[1579]: time="2025-02-13T19:33:36.977448646Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:33:36.977537 containerd[1579]: time="2025-02-13T19:33:36.977528115Z" level=info msg="containerd successfully booted in 0.041648s"
Feb 13 19:33:36.977666 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:33:37.139198 tar[1577]: linux-amd64/LICENSE
Feb 13 19:33:37.139198 tar[1577]: linux-amd64/README.md
Feb 13 19:33:37.152093 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:33:37.375345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:33:37.377737 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:33:37.379273 systemd[1]: Startup finished in 6.121s (kernel) + 3.943s (userspace) = 10.065s.
Feb 13 19:33:37.380246 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:37.805291 kubelet[1662]: E0213 19:33:37.805229 1662 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:37.809641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:37.809976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:45.864374 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:33:45.876004 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:40632.service - OpenSSH per-connection server daemon (10.0.0.1:40632). Feb 13 19:33:45.920307 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 40632 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:33:45.922253 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:45.932482 systemd-logind[1555]: New session 1 of user core. Feb 13 19:33:45.934141 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:33:45.947225 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:33:45.961962 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:33:45.976007 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:33:45.979147 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:33:46.084429 systemd[1682]: Queued start job for default target default.target. 
Feb 13 19:33:46.084876 systemd[1682]: Created slice app.slice - User Application Slice. Feb 13 19:33:46.084899 systemd[1682]: Reached target paths.target - Paths. Feb 13 19:33:46.084911 systemd[1682]: Reached target timers.target - Timers. Feb 13 19:33:46.094885 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:33:46.101050 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:33:46.101118 systemd[1682]: Reached target sockets.target - Sockets. Feb 13 19:33:46.101133 systemd[1682]: Reached target basic.target - Basic System. Feb 13 19:33:46.101170 systemd[1682]: Reached target default.target - Main User Target. Feb 13 19:33:46.101200 systemd[1682]: Startup finished in 115ms. Feb 13 19:33:46.101823 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:33:46.103352 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:33:46.161115 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:40636.service - OpenSSH per-connection server daemon (10.0.0.1:40636). Feb 13 19:33:46.204109 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 40636 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:33:46.205417 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:46.209540 systemd-logind[1555]: New session 2 of user core. Feb 13 19:33:46.219033 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:33:46.272020 sshd[1697]: Connection closed by 10.0.0.1 port 40636 Feb 13 19:33:46.272360 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:46.281083 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:40642.service - OpenSSH per-connection server daemon (10.0.0.1:40642). Feb 13 19:33:46.281561 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:40636.service: Deactivated successfully. Feb 13 19:33:46.283929 systemd-logind[1555]: Session 2 logged out. 
Waiting for processes to exit. Feb 13 19:33:46.284975 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:33:46.286147 systemd-logind[1555]: Removed session 2. Feb 13 19:33:46.321681 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 40642 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:33:46.323615 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:46.327912 systemd-logind[1555]: New session 3 of user core. Feb 13 19:33:46.338040 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:33:46.388355 sshd[1705]: Connection closed by 10.0.0.1 port 40642 Feb 13 19:33:46.388914 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:46.398051 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:40644.service - OpenSSH per-connection server daemon (10.0.0.1:40644). Feb 13 19:33:46.398514 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:40642.service: Deactivated successfully. Feb 13 19:33:46.401928 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:33:46.402026 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:33:46.403672 systemd-logind[1555]: Removed session 3. Feb 13 19:33:46.437223 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 40644 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:33:46.438916 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:46.443153 systemd-logind[1555]: New session 4 of user core. Feb 13 19:33:46.452029 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 19:33:46.506301 sshd[1713]: Connection closed by 10.0.0.1 port 40644 Feb 13 19:33:46.506769 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:46.522063 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:40646.service - OpenSSH per-connection server daemon (10.0.0.1:40646). Feb 13 19:33:46.522547 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:40644.service: Deactivated successfully. Feb 13 19:33:46.525190 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:33:46.526606 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:33:46.527534 systemd-logind[1555]: Removed session 4. Feb 13 19:33:46.562127 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 40646 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:33:46.563461 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:46.567527 systemd-logind[1555]: New session 5 of user core. Feb 13 19:33:46.578048 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:33:46.637142 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:33:46.637512 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:46.661106 sudo[1722]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:46.662751 sshd[1721]: Connection closed by 10.0.0.1 port 40646 Feb 13 19:33:46.663164 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:46.672002 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:40662.service - OpenSSH per-connection server daemon (10.0.0.1:40662). Feb 13 19:33:46.672550 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:40646.service: Deactivated successfully. Feb 13 19:33:46.674668 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:33:46.676063 systemd[1]: session-5.scope: Deactivated successfully. 
Feb 13 19:33:46.677016 systemd-logind[1555]: Removed session 5. Feb 13 19:33:46.718190 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 40662 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:33:46.719940 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:46.724247 systemd-logind[1555]: New session 6 of user core. Feb 13 19:33:46.734098 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:33:46.790874 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:33:46.791346 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:46.795686 sudo[1732]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:46.801896 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:33:46.802220 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:46.829114 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:33:46.865986 augenrules[1754]: No rules Feb 13 19:33:46.867845 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:33:46.868225 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:33:46.869679 sudo[1731]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:46.871376 sshd[1730]: Connection closed by 10.0.0.1 port 40662 Feb 13 19:33:46.871723 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:46.884149 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:40676.service - OpenSSH per-connection server daemon (10.0.0.1:40676). Feb 13 19:33:46.884915 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:40662.service: Deactivated successfully. Feb 13 19:33:46.886850 systemd[1]: session-6.scope: Deactivated successfully. 
Feb 13 19:33:46.887476 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:33:46.888704 systemd-logind[1555]: Removed session 6. Feb 13 19:33:46.924358 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 40676 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:33:46.926111 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:46.930066 systemd-logind[1555]: New session 7 of user core. Feb 13 19:33:46.940033 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:33:46.994044 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:33:46.994428 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:47.257987 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:33:47.259305 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:33:47.505052 dockerd[1787]: time="2025-02-13T19:33:47.504981477Z" level=info msg="Starting up" Feb 13 19:33:47.972624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:33:47.978955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:47.995515 dockerd[1787]: time="2025-02-13T19:33:47.995476044Z" level=info msg="Loading containers: start." Feb 13 19:33:48.120495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:33:48.124968 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:48.303500 kubelet[1887]: E0213 19:33:48.303325 1887 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:48.310151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:48.310418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:48.321824 kernel: Initializing XFRM netlink socket Feb 13 19:33:48.397369 systemd-networkd[1246]: docker0: Link UP Feb 13 19:33:48.434118 dockerd[1787]: time="2025-02-13T19:33:48.434089004Z" level=info msg="Loading containers: done." Feb 13 19:33:48.447679 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck280338191-merged.mount: Deactivated successfully. Feb 13 19:33:48.450306 dockerd[1787]: time="2025-02-13T19:33:48.450263561Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:33:48.450399 dockerd[1787]: time="2025-02-13T19:33:48.450351095Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:33:48.450485 dockerd[1787]: time="2025-02-13T19:33:48.450462704Z" level=info msg="Daemon has completed initialization" Feb 13 19:33:48.486433 dockerd[1787]: time="2025-02-13T19:33:48.486340265Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:33:48.486565 systemd[1]: Started docker.service - Docker Application Container Engine. 
Feb 13 19:33:49.414951 containerd[1579]: time="2025-02-13T19:33:49.414895415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:33:49.973513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168046510.mount: Deactivated successfully. Feb 13 19:33:50.906213 containerd[1579]: time="2025-02-13T19:33:50.906159816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:50.907084 containerd[1579]: time="2025-02-13T19:33:50.907052620Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 19:33:50.908337 containerd[1579]: time="2025-02-13T19:33:50.908301702Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:50.911039 containerd[1579]: time="2025-02-13T19:33:50.911003548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:50.911933 containerd[1579]: time="2025-02-13T19:33:50.911883949Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.496939581s" Feb 13 19:33:50.911933 containerd[1579]: time="2025-02-13T19:33:50.911927180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 19:33:50.930891 containerd[1579]: 
time="2025-02-13T19:33:50.930858686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:33:52.234900 containerd[1579]: time="2025-02-13T19:33:52.234838247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:52.235770 containerd[1579]: time="2025-02-13T19:33:52.235707547Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 19:33:52.236974 containerd[1579]: time="2025-02-13T19:33:52.236943775Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:52.240861 containerd[1579]: time="2025-02-13T19:33:52.240823189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:52.241882 containerd[1579]: time="2025-02-13T19:33:52.241833584Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.310945473s" Feb 13 19:33:52.241882 containerd[1579]: time="2025-02-13T19:33:52.241877276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 19:33:52.265370 containerd[1579]: time="2025-02-13T19:33:52.265344517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 
19:33:53.375074 containerd[1579]: time="2025-02-13T19:33:53.375014107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:53.376002 containerd[1579]: time="2025-02-13T19:33:53.375967855Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 19:33:53.377289 containerd[1579]: time="2025-02-13T19:33:53.377253466Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:53.380044 containerd[1579]: time="2025-02-13T19:33:53.380010515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:53.381050 containerd[1579]: time="2025-02-13T19:33:53.381013356Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.115640185s" Feb 13 19:33:53.381050 containerd[1579]: time="2025-02-13T19:33:53.381040617Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 19:33:53.401925 containerd[1579]: time="2025-02-13T19:33:53.401894828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:33:54.294358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488451652.mount: Deactivated successfully. 
Feb 13 19:33:54.854510 containerd[1579]: time="2025-02-13T19:33:54.854445991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:54.855421 containerd[1579]: time="2025-02-13T19:33:54.855378179Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 19:33:54.856807 containerd[1579]: time="2025-02-13T19:33:54.856762154Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:54.859094 containerd[1579]: time="2025-02-13T19:33:54.859065503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:54.859663 containerd[1579]: time="2025-02-13T19:33:54.859614813Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.457691361s" Feb 13 19:33:54.859663 containerd[1579]: time="2025-02-13T19:33:54.859656992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:33:54.881946 containerd[1579]: time="2025-02-13T19:33:54.881910698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:33:55.389618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265339458.mount: Deactivated successfully. 
Feb 13 19:33:56.024488 containerd[1579]: time="2025-02-13T19:33:56.024429350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:56.025416 containerd[1579]: time="2025-02-13T19:33:56.025364213Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:33:56.026752 containerd[1579]: time="2025-02-13T19:33:56.026691882Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:56.029743 containerd[1579]: time="2025-02-13T19:33:56.029687589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:56.030749 containerd[1579]: time="2025-02-13T19:33:56.030700117Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.148753463s" Feb 13 19:33:56.030749 containerd[1579]: time="2025-02-13T19:33:56.030744891Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:33:56.056500 containerd[1579]: time="2025-02-13T19:33:56.056462542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:33:56.530680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019499795.mount: Deactivated successfully. 
Feb 13 19:33:56.536493 containerd[1579]: time="2025-02-13T19:33:56.536452363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:56.537262 containerd[1579]: time="2025-02-13T19:33:56.537203411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 19:33:56.538514 containerd[1579]: time="2025-02-13T19:33:56.538481397Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:56.540611 containerd[1579]: time="2025-02-13T19:33:56.540579892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:56.541294 containerd[1579]: time="2025-02-13T19:33:56.541262782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 484.765305ms" Feb 13 19:33:56.541294 containerd[1579]: time="2025-02-13T19:33:56.541288721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 19:33:56.563256 containerd[1579]: time="2025-02-13T19:33:56.563218389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:33:57.095026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197867336.mount: Deactivated successfully. Feb 13 19:33:58.560634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:33:58.570109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:58.818099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:58.822704 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:58.983345 kubelet[2233]: E0213 19:33:58.983257 2233 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:58.987692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:58.988025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:59.084896 containerd[1579]: time="2025-02-13T19:33:59.084731916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:59.086313 containerd[1579]: time="2025-02-13T19:33:59.086277514Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 19:33:59.089004 containerd[1579]: time="2025-02-13T19:33:59.088979881Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:59.092771 containerd[1579]: time="2025-02-13T19:33:59.092715736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:59.093901 containerd[1579]: time="2025-02-13T19:33:59.093874439Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.530621083s" Feb 13 19:33:59.093901 containerd[1579]: time="2025-02-13T19:33:59.093900848Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 19:34:00.991773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:01.006144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:01.023379 systemd[1]: Reloading requested from client PID 2324 ('systemctl') (unit session-7.scope)... Feb 13 19:34:01.023395 systemd[1]: Reloading... Feb 13 19:34:01.105822 zram_generator::config[2364]: No configuration found. Feb 13 19:34:01.287829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:34:01.367545 systemd[1]: Reloading finished in 343 ms. Feb 13 19:34:01.417878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:34:01.417980 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:34:01.418332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:01.421437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:01.570949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:34:01.587380 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:34:01.627981 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:01.627981 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:34:01.627981 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:01.628884 kubelet[2424]: I0213 19:34:01.628840 2424 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:34:01.939949 kubelet[2424]: I0213 19:34:01.939829 2424 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:34:01.939949 kubelet[2424]: I0213 19:34:01.939856 2424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:34:01.940094 kubelet[2424]: I0213 19:34:01.940049 2424 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:34:01.956661 kubelet[2424]: I0213 19:34:01.956599 2424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:34:01.957433 kubelet[2424]: E0213 19:34:01.957182 2424 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:01.971521 kubelet[2424]: I0213 19:34:01.971481 2424 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:34:01.971999 kubelet[2424]: I0213 19:34:01.971956 2424 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:34:01.972229 kubelet[2424]: I0213 19:34:01.971997 2424 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit"
:-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:34:01.972877 kubelet[2424]: I0213 19:34:01.972836 2424 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:34:01.972877 kubelet[2424]: I0213 19:34:01.972860 2424 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:34:01.973060 kubelet[2424]: I0213 19:34:01.973034 2424 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:01.973830 kubelet[2424]: I0213 19:34:01.973812 2424 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:34:01.973871 kubelet[2424]: I0213 19:34:01.973831 2424 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:34:01.973871 kubelet[2424]: I0213 19:34:01.973858 2424 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:34:01.973917 kubelet[2424]: I0213 19:34:01.973875 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:34:01.976397 kubelet[2424]: W0213 19:34:01.976144 2424 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:01.976397 kubelet[2424]: E0213 19:34:01.976245 2424 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:01.977419 kubelet[2424]: W0213 19:34:01.977377 2424 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:01.977419 kubelet[2424]: 
E0213 19:34:01.977423 2424 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:01.978641 kubelet[2424]: I0213 19:34:01.978611 2424 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:34:01.979894 kubelet[2424]: I0213 19:34:01.979870 2424 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:34:01.979951 kubelet[2424]: W0213 19:34:01.979924 2424 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:34:01.980593 kubelet[2424]: I0213 19:34:01.980474 2424 server.go:1264] "Started kubelet" Feb 13 19:34:01.981346 kubelet[2424]: I0213 19:34:01.981262 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:34:01.981951 kubelet[2424]: I0213 19:34:01.981754 2424 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:34:01.981951 kubelet[2424]: I0213 19:34:01.981789 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:34:01.981951 kubelet[2424]: I0213 19:34:01.981816 2424 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:34:01.982709 kubelet[2424]: I0213 19:34:01.982605 2424 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:34:01.983475 kubelet[2424]: E0213 19:34:01.983370 2424 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:01.983475 kubelet[2424]: I0213 19:34:01.983401 2424 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:34:01.983475 kubelet[2424]: I0213 19:34:01.983457 2424 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:34:01.983577 kubelet[2424]: I0213 19:34:01.983517 2424 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:34:01.983777 kubelet[2424]: W0213 19:34:01.983741 2424 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:01.983845 kubelet[2424]: E0213 19:34:01.983780 2424 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:01.984561 kubelet[2424]: E0213 19:34:01.984033 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms" Feb 13 19:34:01.985657 kubelet[2424]: I0213 19:34:01.985618 2424 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:34:01.985736 kubelet[2424]: I0213 19:34:01.985718 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:34:01.987212 kubelet[2424]: E0213 19:34:01.986878 2424 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:34:01.987212 kubelet[2424]: E0213 19:34:01.987021 2424 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db8435d4e0a9 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:34:01.980453033 +0000 UTC m=+0.388783059,LastTimestamp:2025-02-13 19:34:01.980453033 +0000 UTC m=+0.388783059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:34:01.987337 kubelet[2424]: I0213 19:34:01.987283 2424 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:34:02.000167 kubelet[2424]: I0213 19:34:02.000094 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:34:02.001943 kubelet[2424]: I0213 19:34:02.001915 2424 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:34:02.001943 kubelet[2424]: I0213 19:34:02.001949 2424 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:34:02.002057 kubelet[2424]: I0213 19:34:02.001969 2424 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:34:02.002057 kubelet[2424]: E0213 19:34:02.002016 2424 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:34:02.005224 kubelet[2424]: W0213 19:34:02.004876 2424 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:02.005224 kubelet[2424]: E0213 19:34:02.004928 2424 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 19:34:02.014062 kubelet[2424]: I0213 19:34:02.014039 2424 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:34:02.014062 kubelet[2424]: I0213 19:34:02.014057 2424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:34:02.014175 kubelet[2424]: I0213 19:34:02.014076 2424 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:02.019911 kubelet[2424]: I0213 19:34:02.019879 2424 policy_none.go:49] "None policy: Start" Feb 13 19:34:02.020485 kubelet[2424]: I0213 19:34:02.020453 2424 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:34:02.020485 kubelet[2424]: I0213 19:34:02.020483 2424 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:34:02.028600 kubelet[2424]: I0213 19:34:02.028557 2424 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:34:02.029356 kubelet[2424]: I0213 19:34:02.028755 2424 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:34:02.029356 kubelet[2424]: I0213 19:34:02.028890 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:34:02.030198 kubelet[2424]: E0213 19:34:02.030183 2424 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:34:02.085253 kubelet[2424]: I0213 19:34:02.085221 2424 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:34:02.085504 kubelet[2424]: E0213 19:34:02.085482 2424 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Feb 13 19:34:02.102823 kubelet[2424]: I0213 19:34:02.102747 2424 topology_manager.go:215] "Topology Admit Handler" podUID="241556504c12ae51ad1cbfb31836a3c7" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:34:02.104115 kubelet[2424]: I0213 19:34:02.104085 2424 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:34:02.105282 kubelet[2424]: I0213 19:34:02.105262 2424 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:34:02.184302 kubelet[2424]: I0213 19:34:02.184237 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/241556504c12ae51ad1cbfb31836a3c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"241556504c12ae51ad1cbfb31836a3c7\") " pod="kube-system/kube-apiserver-localhost" 
Feb 13 19:34:02.184302 kubelet[2424]: I0213 19:34:02.184276 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/241556504c12ae51ad1cbfb31836a3c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"241556504c12ae51ad1cbfb31836a3c7\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:02.184302 kubelet[2424]: I0213 19:34:02.184299 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.184302 kubelet[2424]: I0213 19:34:02.184315 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.184582 kubelet[2424]: I0213 19:34:02.184332 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.184582 kubelet[2424]: I0213 19:34:02.184347 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/241556504c12ae51ad1cbfb31836a3c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"241556504c12ae51ad1cbfb31836a3c7\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:02.184582 kubelet[2424]: I0213 19:34:02.184425 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.184582 kubelet[2424]: I0213 19:34:02.184473 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.184582 kubelet[2424]: I0213 19:34:02.184507 2424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:34:02.184761 kubelet[2424]: E0213 19:34:02.184728 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms" Feb 13 19:34:02.287280 kubelet[2424]: I0213 19:34:02.287168 2424 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:34:02.287533 kubelet[2424]: E0213 19:34:02.287509 2424 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Feb 13 19:34:02.410383 kubelet[2424]: E0213 
19:34:02.410347 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:02.411083 containerd[1579]: time="2025-02-13T19:34:02.411052647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:241556504c12ae51ad1cbfb31836a3c7,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:02.412302 kubelet[2424]: E0213 19:34:02.412272 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:02.412374 kubelet[2424]: E0213 19:34:02.412338 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:02.412725 containerd[1579]: time="2025-02-13T19:34:02.412696158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:02.412849 containerd[1579]: time="2025-02-13T19:34:02.412779495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:02.585678 kubelet[2424]: E0213 19:34:02.585540 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Feb 13 19:34:02.689068 kubelet[2424]: I0213 19:34:02.689035 2424 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:34:02.689478 kubelet[2424]: E0213 19:34:02.689313 2424 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Feb 13 19:34:02.888148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915173319.mount: Deactivated successfully. Feb 13 19:34:02.895522 containerd[1579]: time="2025-02-13T19:34:02.895485108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:02.898166 containerd[1579]: time="2025-02-13T19:34:02.898119948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:34:02.899096 containerd[1579]: time="2025-02-13T19:34:02.899057255Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:02.900889 containerd[1579]: time="2025-02-13T19:34:02.900857531Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:02.901639 containerd[1579]: time="2025-02-13T19:34:02.901602868Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:34:02.902530 containerd[1579]: time="2025-02-13T19:34:02.902496644Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:02.903343 containerd[1579]: time="2025-02-13T19:34:02.903297816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:34:02.904185 containerd[1579]: time="2025-02-13T19:34:02.904156667Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:02.904928 containerd[1579]: time="2025-02-13T19:34:02.904898448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.147797ms" Feb 13 19:34:02.908114 containerd[1579]: time="2025-02-13T19:34:02.908084271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.190221ms" Feb 13 19:34:02.908888 containerd[1579]: time="2025-02-13T19:34:02.908850057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.707491ms" Feb 13 19:34:03.030782 containerd[1579]: time="2025-02-13T19:34:03.030694695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:03.030916 containerd[1579]: time="2025-02-13T19:34:03.030787239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:03.030916 containerd[1579]: time="2025-02-13T19:34:03.030819089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:03.031194 containerd[1579]: time="2025-02-13T19:34:03.030889581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:03.031194 containerd[1579]: time="2025-02-13T19:34:03.030855206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:03.031194 containerd[1579]: time="2025-02-13T19:34:03.030876416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:03.031194 containerd[1579]: time="2025-02-13T19:34:03.030978317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:03.031667 containerd[1579]: time="2025-02-13T19:34:03.031595845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:03.031873 containerd[1579]: time="2025-02-13T19:34:03.031646320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:03.031873 containerd[1579]: time="2025-02-13T19:34:03.031826027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:03.031873 containerd[1579]: time="2025-02-13T19:34:03.031849070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:03.032927 containerd[1579]: time="2025-02-13T19:34:03.032859364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:03.087296 containerd[1579]: time="2025-02-13T19:34:03.087251389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"95a1273283466597539607df702476908f73ee0a58d923f9a7c3ff6e5b5c1d14\"" Feb 13 19:34:03.089131 kubelet[2424]: E0213 19:34:03.088882 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:03.090931 containerd[1579]: time="2025-02-13T19:34:03.090899449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc467af1d8940937464a8bc728381dab17c5903adb2c4b01ff90a7deedd3495b\"" Feb 13 19:34:03.091105 containerd[1579]: time="2025-02-13T19:34:03.091021558Z" level=info msg="CreateContainer within sandbox \"95a1273283466597539607df702476908f73ee0a58d923f9a7c3ff6e5b5c1d14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:34:03.091272 containerd[1579]: time="2025-02-13T19:34:03.091251800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:241556504c12ae51ad1cbfb31836a3c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"08bbefe4c3bf40155193c7aaf82fff9161b13fc3cd1022b13dfdd610a68fe06a\"" Feb 13 19:34:03.092498 kubelet[2424]: E0213 19:34:03.092479 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:03.093470 
kubelet[2424]: E0213 19:34:03.093453 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:03.094301 containerd[1579]: time="2025-02-13T19:34:03.094227239Z" level=info msg="CreateContainer within sandbox \"08bbefe4c3bf40155193c7aaf82fff9161b13fc3cd1022b13dfdd610a68fe06a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 19:34:03.095334 containerd[1579]: time="2025-02-13T19:34:03.095304769Z" level=info msg="CreateContainer within sandbox \"cc467af1d8940937464a8bc728381dab17c5903adb2c4b01ff90a7deedd3495b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 19:34:03.117823 containerd[1579]: time="2025-02-13T19:34:03.117754422Z" level=info msg="CreateContainer within sandbox \"95a1273283466597539607df702476908f73ee0a58d923f9a7c3ff6e5b5c1d14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6331cc5a9a3d9bd3b734982dd0365b75655bd1751d2c017463b12bc6ef921b4a\""
Feb 13 19:34:03.118236 containerd[1579]: time="2025-02-13T19:34:03.118215727Z" level=info msg="StartContainer for \"6331cc5a9a3d9bd3b734982dd0365b75655bd1751d2c017463b12bc6ef921b4a\""
Feb 13 19:34:03.122257 containerd[1579]: time="2025-02-13T19:34:03.122226237Z" level=info msg="CreateContainer within sandbox \"08bbefe4c3bf40155193c7aaf82fff9161b13fc3cd1022b13dfdd610a68fe06a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2abce8117bb123fd8fe620ebf8468dfd183d6a475c16889e97ec6aab9beb9ea0\""
Feb 13 19:34:03.122623 containerd[1579]: time="2025-02-13T19:34:03.122602793Z" level=info msg="StartContainer for \"2abce8117bb123fd8fe620ebf8468dfd183d6a475c16889e97ec6aab9beb9ea0\""
Feb 13 19:34:03.124103 containerd[1579]: time="2025-02-13T19:34:03.124068371Z" level=info msg="CreateContainer within sandbox \"cc467af1d8940937464a8bc728381dab17c5903adb2c4b01ff90a7deedd3495b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ec90f1245f76b44bc07442f78258ee837ea5a739d3cb26642606ff32f3b3d8d\""
Feb 13 19:34:03.124410 containerd[1579]: time="2025-02-13T19:34:03.124388101Z" level=info msg="StartContainer for \"8ec90f1245f76b44bc07442f78258ee837ea5a739d3cb26642606ff32f3b3d8d\""
Feb 13 19:34:03.193260 containerd[1579]: time="2025-02-13T19:34:03.193133929Z" level=info msg="StartContainer for \"8ec90f1245f76b44bc07442f78258ee837ea5a739d3cb26642606ff32f3b3d8d\" returns successfully"
Feb 13 19:34:03.200941 containerd[1579]: time="2025-02-13T19:34:03.200893518Z" level=info msg="StartContainer for \"2abce8117bb123fd8fe620ebf8468dfd183d6a475c16889e97ec6aab9beb9ea0\" returns successfully"
Feb 13 19:34:03.207616 containerd[1579]: time="2025-02-13T19:34:03.207570157Z" level=info msg="StartContainer for \"6331cc5a9a3d9bd3b734982dd0365b75655bd1751d2c017463b12bc6ef921b4a\" returns successfully"
Feb 13 19:34:03.238697 kubelet[2424]: W0213 19:34:03.238628 2424 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Feb 13 19:34:03.238697 kubelet[2424]: E0213 19:34:03.238698 2424 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Feb 13 19:34:03.493004 kubelet[2424]: I0213 19:34:03.492893 2424 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 19:34:04.010198 kubelet[2424]: E0213 19:34:04.010072 2424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 19:34:04.017896 kubelet[2424]: E0213 19:34:04.017689 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:04.020368 kubelet[2424]: E0213 19:34:04.020142 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:04.022840 kubelet[2424]: E0213 19:34:04.022828 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:04.207600 kubelet[2424]: I0213 19:34:04.207550 2424 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 19:34:04.979134 kubelet[2424]: I0213 19:34:04.979094 2424 apiserver.go:52] "Watching apiserver"
Feb 13 19:34:04.984226 kubelet[2424]: I0213 19:34:04.984190 2424 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:34:05.038184 kubelet[2424]: E0213 19:34:05.038143 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:06.006356 systemd[1]: Reloading requested from client PID 2702 ('systemctl') (unit session-7.scope)...
Feb 13 19:34:06.006371 systemd[1]: Reloading...
Feb 13 19:34:06.027886 kubelet[2424]: E0213 19:34:06.027860 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:06.080847 zram_generator::config[2741]: No configuration found.
Feb 13 19:34:06.666146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:34:06.744668 systemd[1]: Reloading finished in 737 ms.
Feb 13 19:34:06.780039 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:34:06.803090 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:34:06.803580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:34:06.815988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:34:06.955269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:34:06.959641 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:34:07.004912 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:34:07.004912 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:34:07.004912 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:34:07.005314 kubelet[2796]: I0213 19:34:07.004957 2796 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:34:07.009760 kubelet[2796]: I0213 19:34:07.009712 2796 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:34:07.009760 kubelet[2796]: I0213 19:34:07.009768 2796 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:34:07.010073 kubelet[2796]: I0213 19:34:07.010046 2796 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:34:07.011556 kubelet[2796]: I0213 19:34:07.011531 2796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 19:34:07.012964 kubelet[2796]: I0213 19:34:07.012867 2796 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:34:07.025162 kubelet[2796]: I0213 19:34:07.025115 2796 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:34:07.026402 kubelet[2796]: I0213 19:34:07.025837 2796 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:34:07.026402 kubelet[2796]: I0213 19:34:07.025878 2796 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 19:34:07.026402 kubelet[2796]: I0213 19:34:07.026150 2796 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:34:07.026402 kubelet[2796]: I0213 19:34:07.026167 2796 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:34:07.026402 kubelet[2796]: I0213 19:34:07.026223 2796 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:34:07.026593 kubelet[2796]: I0213 19:34:07.026366 2796 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:34:07.026593 kubelet[2796]: I0213 19:34:07.026386 2796 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:34:07.026593 kubelet[2796]: I0213 19:34:07.026434 2796 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:34:07.026593 kubelet[2796]: I0213 19:34:07.026453 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:34:07.028491 kubelet[2796]: I0213 19:34:07.027677 2796 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:34:07.028491 kubelet[2796]: I0213 19:34:07.027923 2796 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:34:07.029208 kubelet[2796]: I0213 19:34:07.029197 2796 server.go:1264] "Started kubelet"
Feb 13 19:34:07.029860 kubelet[2796]: I0213 19:34:07.029716 2796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:34:07.029953 kubelet[2796]: I0213 19:34:07.029910 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:34:07.030681 kubelet[2796]: I0213 19:34:07.030668 2796 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:34:07.030978 kubelet[2796]: I0213 19:34:07.030960 2796 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 19:34:07.032729 kubelet[2796]: I0213 19:34:07.032712 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:34:07.039471 kubelet[2796]: E0213 19:34:07.039440 2796 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:34:07.039631 kubelet[2796]: I0213 19:34:07.039615 2796 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 19:34:07.039744 kubelet[2796]: I0213 19:34:07.039731 2796 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:34:07.039900 kubelet[2796]: I0213 19:34:07.039888 2796 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:34:07.040204 kubelet[2796]: I0213 19:34:07.040185 2796 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:34:07.040301 kubelet[2796]: I0213 19:34:07.040276 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:34:07.043299 kubelet[2796]: I0213 19:34:07.043286 2796 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:34:07.053544 kubelet[2796]: I0213 19:34:07.053486 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:34:07.055159 kubelet[2796]: I0213 19:34:07.055125 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:34:07.055159 kubelet[2796]: I0213 19:34:07.055154 2796 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:34:07.055242 kubelet[2796]: I0213 19:34:07.055191 2796 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 19:34:07.055282 kubelet[2796]: E0213 19:34:07.055241 2796 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:34:07.091224 kubelet[2796]: I0213 19:34:07.091157 2796 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:34:07.091224 kubelet[2796]: I0213 19:34:07.091175 2796 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:34:07.091224 kubelet[2796]: I0213 19:34:07.091192 2796 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:34:07.091459 kubelet[2796]: I0213 19:34:07.091352 2796 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:34:07.091459 kubelet[2796]: I0213 19:34:07.091362 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:34:07.091459 kubelet[2796]: I0213 19:34:07.091380 2796 policy_none.go:49] "None policy: Start"
Feb 13 19:34:07.092098 kubelet[2796]: I0213 19:34:07.092078 2796 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:34:07.092129 kubelet[2796]: I0213 19:34:07.092105 2796 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:34:07.092274 kubelet[2796]: I0213 19:34:07.092254 2796 state_mem.go:75] "Updated machine memory state"
Feb 13 19:34:07.093765 kubelet[2796]: I0213 19:34:07.093725 2796 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:34:07.095688 kubelet[2796]: I0213 19:34:07.094834 2796 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:34:07.095688 kubelet[2796]: I0213 19:34:07.094937 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:34:07.146682 kubelet[2796]: I0213 19:34:07.146651 2796 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 19:34:07.151994 kubelet[2796]: I0213 19:34:07.151968 2796 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Feb 13 19:34:07.152053 kubelet[2796]: I0213 19:34:07.152047 2796 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 19:34:07.155965 kubelet[2796]: I0213 19:34:07.155917 2796 topology_manager.go:215] "Topology Admit Handler" podUID="241556504c12ae51ad1cbfb31836a3c7" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 19:34:07.156075 kubelet[2796]: I0213 19:34:07.156019 2796 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 19:34:07.156100 kubelet[2796]: I0213 19:34:07.156076 2796 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 19:34:07.162082 kubelet[2796]: E0213 19:34:07.162060 2796 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:34:07.241458 kubelet[2796]: I0213 19:34:07.241372 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/241556504c12ae51ad1cbfb31836a3c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"241556504c12ae51ad1cbfb31836a3c7\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:34:07.241458 kubelet[2796]: I0213 19:34:07.241412 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/241556504c12ae51ad1cbfb31836a3c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"241556504c12ae51ad1cbfb31836a3c7\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:34:07.241458 kubelet[2796]: I0213 19:34:07.241437 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:34:07.241458 kubelet[2796]: I0213 19:34:07.241453 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:34:07.241571 kubelet[2796]: I0213 19:34:07.241472 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 19:34:07.241571 kubelet[2796]: I0213 19:34:07.241489 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/241556504c12ae51ad1cbfb31836a3c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"241556504c12ae51ad1cbfb31836a3c7\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:34:07.241571 kubelet[2796]: I0213 19:34:07.241504 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:34:07.241571 kubelet[2796]: I0213 19:34:07.241519 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:34:07.241571 kubelet[2796]: I0213 19:34:07.241541 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:34:07.460526 kubelet[2796]: E0213 19:34:07.460419 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:07.461722 kubelet[2796]: E0213 19:34:07.461665 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:07.462793 kubelet[2796]: E0213 19:34:07.462719 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:08.028098 kubelet[2796]: I0213 19:34:08.028052 2796 apiserver.go:52] "Watching apiserver"
Feb 13 19:34:08.041041 kubelet[2796]: I0213 19:34:08.040975 2796 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:34:08.066621 kubelet[2796]: E0213 19:34:08.066576 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:08.067395 kubelet[2796]: E0213 19:34:08.067372 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:08.073391 kubelet[2796]: E0213 19:34:08.073342 2796 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:34:08.074343 kubelet[2796]: E0213 19:34:08.073870 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:08.093682 kubelet[2796]: I0213 19:34:08.093588 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.093569034 podStartE2EDuration="3.093569034s" podCreationTimestamp="2025-02-13 19:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:08.092641705 +0000 UTC m=+1.128867407" watchObservedRunningTime="2025-02-13 19:34:08.093569034 +0000 UTC m=+1.129794736"
Feb 13 19:34:08.093873 kubelet[2796]: I0213 19:34:08.093739 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.093734083 podStartE2EDuration="1.093734083s" podCreationTimestamp="2025-02-13 19:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:08.084711175 +0000 UTC m=+1.120936877" watchObservedRunningTime="2025-02-13 19:34:08.093734083 +0000 UTC m=+1.129959785"
Feb 13 19:34:08.103099 kubelet[2796]: I0213 19:34:08.103045 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.103037487 podStartE2EDuration="1.103037487s" podCreationTimestamp="2025-02-13 19:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:08.102427082 +0000 UTC m=+1.138652774" watchObservedRunningTime="2025-02-13 19:34:08.103037487 +0000 UTC m=+1.139263189"
Feb 13 19:34:09.067719 kubelet[2796]: E0213 19:34:09.067690 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:09.188604 kubelet[2796]: E0213 19:34:09.188552 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:11.129665 sudo[1767]: pam_unix(sudo:session): session closed for user root
Feb 13 19:34:11.131306 sshd[1766]: Connection closed by 10.0.0.1 port 40676
Feb 13 19:34:11.131785 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:11.135735 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:40676.service: Deactivated successfully.
Feb 13 19:34:11.139074 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:34:11.139272 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:34:11.141051 systemd-logind[1555]: Removed session 7.
Feb 13 19:34:13.035956 kubelet[2796]: E0213 19:34:13.035919 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:13.073332 kubelet[2796]: E0213 19:34:13.073280 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:15.384247 kubelet[2796]: E0213 19:34:15.384202 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:16.077339 kubelet[2796]: E0213 19:34:16.077309 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:19.192312 kubelet[2796]: E0213 19:34:19.192271 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:19.623947 kubelet[2796]: I0213 19:34:19.623916 2796 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 19:34:19.624314 containerd[1579]: time="2025-02-13T19:34:19.624277833Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:34:19.624677 kubelet[2796]: I0213 19:34:19.624585 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:34:20.491164 kubelet[2796]: I0213 19:34:20.491115 2796 topology_manager.go:215] "Topology Admit Handler" podUID="66b28003-46f6-4d96-b37d-cd2550acbe99" podNamespace="kube-system" podName="kube-proxy-hrh4r"
Feb 13 19:34:20.522885 kubelet[2796]: I0213 19:34:20.522840 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66b28003-46f6-4d96-b37d-cd2550acbe99-xtables-lock\") pod \"kube-proxy-hrh4r\" (UID: \"66b28003-46f6-4d96-b37d-cd2550acbe99\") " pod="kube-system/kube-proxy-hrh4r"
Feb 13 19:34:20.522885 kubelet[2796]: I0213 19:34:20.522889 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfln7\" (UniqueName: \"kubernetes.io/projected/66b28003-46f6-4d96-b37d-cd2550acbe99-kube-api-access-wfln7\") pod \"kube-proxy-hrh4r\" (UID: \"66b28003-46f6-4d96-b37d-cd2550acbe99\") " pod="kube-system/kube-proxy-hrh4r"
Feb 13 19:34:20.523086 kubelet[2796]: I0213 19:34:20.522916 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66b28003-46f6-4d96-b37d-cd2550acbe99-kube-proxy\") pod \"kube-proxy-hrh4r\" (UID: \"66b28003-46f6-4d96-b37d-cd2550acbe99\") " pod="kube-system/kube-proxy-hrh4r"
Feb 13 19:34:20.523086 kubelet[2796]: I0213 19:34:20.522934 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66b28003-46f6-4d96-b37d-cd2550acbe99-lib-modules\") pod \"kube-proxy-hrh4r\" (UID: \"66b28003-46f6-4d96-b37d-cd2550acbe99\") " pod="kube-system/kube-proxy-hrh4r"
Feb 13 19:34:20.658504 kubelet[2796]: I0213 19:34:20.658453 2796 topology_manager.go:215] "Topology Admit Handler" podUID="1ed59d07-340b-4f38-8c69-09e075282e24" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-8n8j4"
Feb 13 19:34:20.725240 kubelet[2796]: I0213 19:34:20.725201 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbp8g\" (UniqueName: \"kubernetes.io/projected/1ed59d07-340b-4f38-8c69-09e075282e24-kube-api-access-cbp8g\") pod \"tigera-operator-7bc55997bb-8n8j4\" (UID: \"1ed59d07-340b-4f38-8c69-09e075282e24\") " pod="tigera-operator/tigera-operator-7bc55997bb-8n8j4"
Feb 13 19:34:20.725240 kubelet[2796]: I0213 19:34:20.725241 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ed59d07-340b-4f38-8c69-09e075282e24-var-lib-calico\") pod \"tigera-operator-7bc55997bb-8n8j4\" (UID: \"1ed59d07-340b-4f38-8c69-09e075282e24\") " pod="tigera-operator/tigera-operator-7bc55997bb-8n8j4"
Feb 13 19:34:20.795260 kubelet[2796]: E0213 19:34:20.795102 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:20.795925 containerd[1579]: time="2025-02-13T19:34:20.795875670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrh4r,Uid:66b28003-46f6-4d96-b37d-cd2550acbe99,Namespace:kube-system,Attempt:0,}"
Feb 13 19:34:20.819438 containerd[1579]: time="2025-02-13T19:34:20.819327142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:34:20.820472 containerd[1579]: time="2025-02-13T19:34:20.820123618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:34:20.820472 containerd[1579]: time="2025-02-13T19:34:20.820211706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:34:20.820472 containerd[1579]: time="2025-02-13T19:34:20.820389935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:34:20.857040 containerd[1579]: time="2025-02-13T19:34:20.856997965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrh4r,Uid:66b28003-46f6-4d96-b37d-cd2550acbe99,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b23af984ca9eeae60e185708b9a0b5aceb316ea3af8f7018c16e8e232a69bfd\""
Feb 13 19:34:20.858443 kubelet[2796]: E0213 19:34:20.858200 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:20.860906 containerd[1579]: time="2025-02-13T19:34:20.860856940Z" level=info msg="CreateContainer within sandbox \"4b23af984ca9eeae60e185708b9a0b5aceb316ea3af8f7018c16e8e232a69bfd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:34:20.878767 containerd[1579]: time="2025-02-13T19:34:20.878705990Z" level=info msg="CreateContainer within sandbox \"4b23af984ca9eeae60e185708b9a0b5aceb316ea3af8f7018c16e8e232a69bfd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a90d235b4522dbf19389620be24a2cc045443ad1e07b7edc0db75b204f09f3d\""
Feb 13 19:34:20.879279 containerd[1579]: time="2025-02-13T19:34:20.879249634Z" level=info msg="StartContainer for \"9a90d235b4522dbf19389620be24a2cc045443ad1e07b7edc0db75b204f09f3d\""
Feb 13 19:34:20.947872 containerd[1579]: time="2025-02-13T19:34:20.947785017Z" level=info msg="StartContainer for \"9a90d235b4522dbf19389620be24a2cc045443ad1e07b7edc0db75b204f09f3d\" returns successfully"
Feb 13 19:34:20.964424 containerd[1579]: time="2025-02-13T19:34:20.964352788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8n8j4,Uid:1ed59d07-340b-4f38-8c69-09e075282e24,Namespace:tigera-operator,Attempt:0,}"
Feb 13 19:34:20.996850 containerd[1579]: time="2025-02-13T19:34:20.996705871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:34:20.996850 containerd[1579]: time="2025-02-13T19:34:20.996785222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:34:20.997768 containerd[1579]: time="2025-02-13T19:34:20.997547904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:34:20.997768 containerd[1579]: time="2025-02-13T19:34:20.997664776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:34:21.056763 containerd[1579]: time="2025-02-13T19:34:21.056563912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8n8j4,Uid:1ed59d07-340b-4f38-8c69-09e075282e24,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2daa9fc0f6c83befdad5320618ed14425d3f01c31657b9d419f86ff9b54d2afa\""
Feb 13 19:34:21.058691 containerd[1579]: time="2025-02-13T19:34:21.058673353Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 19:34:21.085913 kubelet[2796]: E0213 19:34:21.085846 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:21.094125 kubelet[2796]: I0213 19:34:21.094032 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrh4r" podStartSLOduration=1.094013668 podStartE2EDuration="1.094013668s" podCreationTimestamp="2025-02-13 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:21.093842413 +0000 UTC m=+14.130068105" watchObservedRunningTime="2025-02-13 19:34:21.094013668 +0000 UTC m=+14.130239370"
Feb 13 19:34:22.326140 update_engine[1557]: I20250213 19:34:22.326027 1557 update_attempter.cc:509] Updating boot flags...
Feb 13 19:34:22.349849 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3084)
Feb 13 19:34:22.377842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3138)
Feb 13 19:34:22.588966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211794852.mount: Deactivated successfully.
Feb 13 19:34:22.860903 containerd[1579]: time="2025-02-13T19:34:22.860861243Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:22.861700 containerd[1579]: time="2025-02-13T19:34:22.861663578Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Feb 13 19:34:22.862816 containerd[1579]: time="2025-02-13T19:34:22.862777504Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:22.865137 containerd[1579]: time="2025-02-13T19:34:22.865073456Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:22.865658 containerd[1579]: time="2025-02-13T19:34:22.865609915Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.806642273s"
Feb 13 19:34:22.865658 containerd[1579]: time="2025-02-13T19:34:22.865650131Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Feb 13 19:34:22.869520 containerd[1579]: time="2025-02-13T19:34:22.869418460Z" level=info msg="CreateContainer within sandbox \"2daa9fc0f6c83befdad5320618ed14425d3f01c31657b9d419f86ff9b54d2afa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 19:34:22.883271 containerd[1579]: time="2025-02-13T19:34:22.883232784Z" level=info msg="CreateContainer within sandbox \"2daa9fc0f6c83befdad5320618ed14425d3f01c31657b9d419f86ff9b54d2afa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bdaeec57e9b1451601e0fce72200826bd00ea34328f03070475590d7638e7656\""
Feb 13 19:34:22.883915 containerd[1579]: time="2025-02-13T19:34:22.883680845Z" level=info msg="StartContainer for \"bdaeec57e9b1451601e0fce72200826bd00ea34328f03070475590d7638e7656\""
Feb 13 19:34:22.933493 containerd[1579]: time="2025-02-13T19:34:22.933449653Z" level=info msg="StartContainer for \"bdaeec57e9b1451601e0fce72200826bd00ea34328f03070475590d7638e7656\" returns successfully"
Feb 13 19:34:23.143443 kubelet[2796]: I0213 19:34:23.143038 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-8n8j4" podStartSLOduration=1.332970587 podStartE2EDuration="3.143020905s" podCreationTimestamp="2025-02-13 19:34:20 +0000 UTC" firstStartedPulling="2025-02-13 19:34:21.057880135 +0000 UTC m=+14.094105837" lastFinishedPulling="2025-02-13 19:34:22.867930453 +0000 UTC m=+15.904156155" observedRunningTime="2025-02-13 19:34:23.142058588 +0000 UTC m=+16.178284290" watchObservedRunningTime="2025-02-13 19:34:23.143020905 +0000 UTC m=+16.179246607"
Feb 13 19:34:26.266838 kubelet[2796]: I0213 19:34:26.266412 2796 topology_manager.go:215] "Topology Admit Handler" podUID="62836c67-8897-4c9d-807d-6b89362bd73f" podNamespace="calico-system" podName="calico-typha-fd5668d57-xgh97"
Feb 13 19:34:26.369110 kubelet[2796]: I0213 19:34:26.369054 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/62836c67-8897-4c9d-807d-6b89362bd73f-typha-certs\") pod \"calico-typha-fd5668d57-xgh97\" (UID: \"62836c67-8897-4c9d-807d-6b89362bd73f\") " pod="calico-system/calico-typha-fd5668d57-xgh97"
Feb 13 19:34:26.369110 kubelet[2796]: I0213 19:34:26.369105 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"kube-api-access-vgrwv\" (UniqueName: \"kubernetes.io/projected/62836c67-8897-4c9d-807d-6b89362bd73f-kube-api-access-vgrwv\") pod \"calico-typha-fd5668d57-xgh97\" (UID: \"62836c67-8897-4c9d-807d-6b89362bd73f\") " pod="calico-system/calico-typha-fd5668d57-xgh97" Feb 13 19:34:26.369110 kubelet[2796]: I0213 19:34:26.369126 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62836c67-8897-4c9d-807d-6b89362bd73f-tigera-ca-bundle\") pod \"calico-typha-fd5668d57-xgh97\" (UID: \"62836c67-8897-4c9d-807d-6b89362bd73f\") " pod="calico-system/calico-typha-fd5668d57-xgh97" Feb 13 19:34:26.458491 kubelet[2796]: I0213 19:34:26.458365 2796 topology_manager.go:215] "Topology Admit Handler" podUID="1157e559-65ca-4388-8be5-d109dfc71dc1" podNamespace="calico-system" podName="calico-node-c7tpt" Feb 13 19:34:26.470009 kubelet[2796]: I0213 19:34:26.469969 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-lib-modules\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470009 kubelet[2796]: I0213 19:34:26.470015 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-cni-log-dir\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470181 kubelet[2796]: I0213 19:34:26.470039 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1157e559-65ca-4388-8be5-d109dfc71dc1-node-certs\") pod \"calico-node-c7tpt\" (UID: 
\"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470181 kubelet[2796]: I0213 19:34:26.470061 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-flexvol-driver-host\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470181 kubelet[2796]: I0213 19:34:26.470094 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtvz\" (UniqueName: \"kubernetes.io/projected/1157e559-65ca-4388-8be5-d109dfc71dc1-kube-api-access-6gtvz\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470271 kubelet[2796]: I0213 19:34:26.470171 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-var-lib-calico\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470271 kubelet[2796]: I0213 19:34:26.470212 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-cni-net-dir\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470271 kubelet[2796]: I0213 19:34:26.470267 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-xtables-lock\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " 
pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470370 kubelet[2796]: I0213 19:34:26.470283 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-var-run-calico\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470370 kubelet[2796]: I0213 19:34:26.470320 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1157e559-65ca-4388-8be5-d109dfc71dc1-tigera-ca-bundle\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470370 kubelet[2796]: I0213 19:34:26.470335 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-cni-bin-dir\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.470471 kubelet[2796]: I0213 19:34:26.470420 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1157e559-65ca-4388-8be5-d109dfc71dc1-policysync\") pod \"calico-node-c7tpt\" (UID: \"1157e559-65ca-4388-8be5-d109dfc71dc1\") " pod="calico-system/calico-node-c7tpt" Feb 13 19:34:26.572586 kubelet[2796]: E0213 19:34:26.572293 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:26.573268 containerd[1579]: time="2025-02-13T19:34:26.573057829Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-fd5668d57-xgh97,Uid:62836c67-8897-4c9d-807d-6b89362bd73f,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:26.574046 kubelet[2796]: E0213 19:34:26.573527 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.574046 kubelet[2796]: W0213 19:34:26.573549 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.574046 kubelet[2796]: E0213 19:34:26.573576 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.574870 kubelet[2796]: E0213 19:34:26.574764 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.574870 kubelet[2796]: W0213 19:34:26.574780 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.574870 kubelet[2796]: E0213 19:34:26.574796 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.575645 kubelet[2796]: E0213 19:34:26.575609 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.575645 kubelet[2796]: W0213 19:34:26.575640 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.575735 kubelet[2796]: E0213 19:34:26.575660 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.678824 kubelet[2796]: E0213 19:34:26.674934 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.678824 kubelet[2796]: W0213 19:34:26.674961 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.678824 kubelet[2796]: E0213 19:34:26.674985 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.680834 kubelet[2796]: I0213 19:34:26.679745 2796 topology_manager.go:215] "Topology Admit Handler" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" podNamespace="calico-system" podName="csi-node-driver-dwgr8" Feb 13 19:34:26.682816 kubelet[2796]: E0213 19:34:26.681290 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:26.682816 kubelet[2796]: E0213 19:34:26.680987 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.682816 kubelet[2796]: W0213 19:34:26.681328 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.682816 kubelet[2796]: E0213 19:34:26.681342 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.684263 kubelet[2796]: E0213 19:34:26.684226 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.684263 kubelet[2796]: W0213 19:34:26.684256 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.684344 kubelet[2796]: E0213 19:34:26.684281 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.688063 kubelet[2796]: E0213 19:34:26.688030 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.688063 kubelet[2796]: W0213 19:34:26.688055 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.688855 kubelet[2796]: E0213 19:34:26.688833 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.689203 kubelet[2796]: E0213 19:34:26.689174 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.689252 kubelet[2796]: W0213 19:34:26.689200 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.689252 kubelet[2796]: E0213 19:34:26.689226 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.689566 kubelet[2796]: E0213 19:34:26.689551 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.689566 kubelet[2796]: W0213 19:34:26.689563 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.689615 kubelet[2796]: E0213 19:34:26.689571 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.690037 kubelet[2796]: E0213 19:34:26.690019 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.690037 kubelet[2796]: W0213 19:34:26.690035 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.690113 kubelet[2796]: E0213 19:34:26.690048 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.766750 containerd[1579]: time="2025-02-13T19:34:26.765871800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:26.766750 containerd[1579]: time="2025-02-13T19:34:26.766034900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:26.766750 containerd[1579]: time="2025-02-13T19:34:26.766056491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:26.766750 containerd[1579]: time="2025-02-13T19:34:26.766679150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:26.768184 kubelet[2796]: E0213 19:34:26.767641 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:26.772819 containerd[1579]: time="2025-02-13T19:34:26.770761783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7tpt,Uid:1157e559-65ca-4388-8be5-d109dfc71dc1,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:26.775541 kubelet[2796]: E0213 19:34:26.775507 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.775541 kubelet[2796]: W0213 19:34:26.775537 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.775840 kubelet[2796]: E0213 19:34:26.775578 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.776945 kubelet[2796]: E0213 19:34:26.775983 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.776945 kubelet[2796]: W0213 19:34:26.775995 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.776945 kubelet[2796]: E0213 19:34:26.776005 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.776945 kubelet[2796]: E0213 19:34:26.776281 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.776945 kubelet[2796]: W0213 19:34:26.776291 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.776945 kubelet[2796]: E0213 19:34:26.776300 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.776945 kubelet[2796]: E0213 19:34:26.776681 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.776945 kubelet[2796]: W0213 19:34:26.776689 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.776945 kubelet[2796]: E0213 19:34:26.776698 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.777164 kubelet[2796]: E0213 19:34:26.776997 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.777164 kubelet[2796]: W0213 19:34:26.777006 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.777164 kubelet[2796]: E0213 19:34:26.777015 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.778344 kubelet[2796]: E0213 19:34:26.777379 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.778344 kubelet[2796]: W0213 19:34:26.777389 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.778344 kubelet[2796]: E0213 19:34:26.777398 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.778344 kubelet[2796]: E0213 19:34:26.777921 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.778344 kubelet[2796]: W0213 19:34:26.777930 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.778344 kubelet[2796]: E0213 19:34:26.778036 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.778471 kubelet[2796]: E0213 19:34:26.778410 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.778471 kubelet[2796]: W0213 19:34:26.778419 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.778471 kubelet[2796]: E0213 19:34:26.778428 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.782161 kubelet[2796]: E0213 19:34:26.780005 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782161 kubelet[2796]: W0213 19:34:26.780018 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782161 kubelet[2796]: E0213 19:34:26.780052 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.782161 kubelet[2796]: E0213 19:34:26.780258 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782161 kubelet[2796]: W0213 19:34:26.780265 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782161 kubelet[2796]: E0213 19:34:26.780289 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.782161 kubelet[2796]: E0213 19:34:26.780493 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782161 kubelet[2796]: W0213 19:34:26.780500 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782161 kubelet[2796]: E0213 19:34:26.780509 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.782161 kubelet[2796]: E0213 19:34:26.780729 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782442 kubelet[2796]: W0213 19:34:26.780736 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782442 kubelet[2796]: E0213 19:34:26.780744 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.782442 kubelet[2796]: E0213 19:34:26.781086 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782442 kubelet[2796]: W0213 19:34:26.781094 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782442 kubelet[2796]: E0213 19:34:26.781102 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.782442 kubelet[2796]: E0213 19:34:26.781313 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782442 kubelet[2796]: W0213 19:34:26.781321 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782442 kubelet[2796]: E0213 19:34:26.781329 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.782442 kubelet[2796]: E0213 19:34:26.781614 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782442 kubelet[2796]: W0213 19:34:26.781622 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782675 kubelet[2796]: E0213 19:34:26.781630 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.782675 kubelet[2796]: E0213 19:34:26.781840 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782675 kubelet[2796]: W0213 19:34:26.781848 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782675 kubelet[2796]: E0213 19:34:26.781958 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.782675 kubelet[2796]: E0213 19:34:26.782309 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782675 kubelet[2796]: W0213 19:34:26.782320 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782675 kubelet[2796]: E0213 19:34:26.782331 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.782675 kubelet[2796]: E0213 19:34:26.782624 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.782675 kubelet[2796]: W0213 19:34:26.782633 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.782675 kubelet[2796]: E0213 19:34:26.782642 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.784684 kubelet[2796]: E0213 19:34:26.783939 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.784684 kubelet[2796]: W0213 19:34:26.783952 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.784684 kubelet[2796]: E0213 19:34:26.783962 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.784684 kubelet[2796]: E0213 19:34:26.784250 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.784684 kubelet[2796]: W0213 19:34:26.784258 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.784684 kubelet[2796]: E0213 19:34:26.784267 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.785020 kubelet[2796]: E0213 19:34:26.784752 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.785020 kubelet[2796]: W0213 19:34:26.784761 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.785020 kubelet[2796]: E0213 19:34:26.784870 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.785020 kubelet[2796]: I0213 19:34:26.784902 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/58082a0b-a7e3-4696-a1fa-c41d6d0bc84c-varrun\") pod \"csi-node-driver-dwgr8\" (UID: \"58082a0b-a7e3-4696-a1fa-c41d6d0bc84c\") " pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:26.785350 kubelet[2796]: E0213 19:34:26.785321 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.785350 kubelet[2796]: W0213 19:34:26.785336 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.785410 kubelet[2796]: E0213 19:34:26.785388 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.785410 kubelet[2796]: I0213 19:34:26.785405 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58082a0b-a7e3-4696-a1fa-c41d6d0bc84c-socket-dir\") pod \"csi-node-driver-dwgr8\" (UID: \"58082a0b-a7e3-4696-a1fa-c41d6d0bc84c\") " pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:26.785976 kubelet[2796]: E0213 19:34:26.785954 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.785976 kubelet[2796]: W0213 19:34:26.785970 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.786034 kubelet[2796]: E0213 19:34:26.786020 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.786066 kubelet[2796]: I0213 19:34:26.786041 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlwx\" (UniqueName: \"kubernetes.io/projected/58082a0b-a7e3-4696-a1fa-c41d6d0bc84c-kube-api-access-wzlwx\") pod \"csi-node-driver-dwgr8\" (UID: \"58082a0b-a7e3-4696-a1fa-c41d6d0bc84c\") " pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:26.786537 kubelet[2796]: E0213 19:34:26.786513 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.786537 kubelet[2796]: W0213 19:34:26.786528 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.786582 kubelet[2796]: E0213 19:34:26.786569 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.786888 kubelet[2796]: E0213 19:34:26.786868 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.786888 kubelet[2796]: W0213 19:34:26.786881 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.786962 kubelet[2796]: E0213 19:34:26.786946 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.787607 kubelet[2796]: E0213 19:34:26.787571 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.787607 kubelet[2796]: W0213 19:34:26.787588 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.787682 kubelet[2796]: E0213 19:34:26.787639 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.787826 kubelet[2796]: E0213 19:34:26.787811 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.787826 kubelet[2796]: W0213 19:34:26.787824 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.787942 kubelet[2796]: E0213 19:34:26.787918 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.787971 kubelet[2796]: I0213 19:34:26.787942 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58082a0b-a7e3-4696-a1fa-c41d6d0bc84c-kubelet-dir\") pod \"csi-node-driver-dwgr8\" (UID: \"58082a0b-a7e3-4696-a1fa-c41d6d0bc84c\") " pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:26.788014 kubelet[2796]: E0213 19:34:26.788001 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.788014 kubelet[2796]: W0213 19:34:26.788012 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.788112 kubelet[2796]: E0213 19:34:26.788090 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.788183 kubelet[2796]: E0213 19:34:26.788170 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.788183 kubelet[2796]: W0213 19:34:26.788180 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.788239 kubelet[2796]: E0213 19:34:26.788189 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.788371 kubelet[2796]: E0213 19:34:26.788357 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.788371 kubelet[2796]: W0213 19:34:26.788369 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.788434 kubelet[2796]: E0213 19:34:26.788380 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.789410 kubelet[2796]: E0213 19:34:26.789341 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.789410 kubelet[2796]: W0213 19:34:26.789357 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.789410 kubelet[2796]: E0213 19:34:26.789369 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.790119 kubelet[2796]: E0213 19:34:26.789690 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.790119 kubelet[2796]: W0213 19:34:26.789699 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.790119 kubelet[2796]: E0213 19:34:26.789708 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.790119 kubelet[2796]: E0213 19:34:26.790092 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.790119 kubelet[2796]: W0213 19:34:26.790100 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.790119 kubelet[2796]: E0213 19:34:26.790109 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.790262 kubelet[2796]: I0213 19:34:26.790127 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58082a0b-a7e3-4696-a1fa-c41d6d0bc84c-registration-dir\") pod \"csi-node-driver-dwgr8\" (UID: \"58082a0b-a7e3-4696-a1fa-c41d6d0bc84c\") " pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:26.790410 kubelet[2796]: E0213 19:34:26.790347 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.790410 kubelet[2796]: W0213 19:34:26.790360 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.790410 kubelet[2796]: E0213 19:34:26.790369 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.790870 kubelet[2796]: E0213 19:34:26.790844 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.790870 kubelet[2796]: W0213 19:34:26.790862 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.790932 kubelet[2796]: E0213 19:34:26.790874 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.831211 containerd[1579]: time="2025-02-13T19:34:26.831004730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd5668d57-xgh97,Uid:62836c67-8897-4c9d-807d-6b89362bd73f,Namespace:calico-system,Attempt:0,} returns sandbox id \"10ce1f74f9926f32e2cfe177c3eb25f8e463ec376e15449f9e9df929bcf26f88\"" Feb 13 19:34:26.831736 kubelet[2796]: E0213 19:34:26.831708 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:26.832915 containerd[1579]: time="2025-02-13T19:34:26.832794530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:34:26.890814 kubelet[2796]: E0213 19:34:26.890783 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.890814 kubelet[2796]: W0213 19:34:26.890819 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.890957 kubelet[2796]: E0213 19:34:26.890839 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.891084 kubelet[2796]: E0213 19:34:26.891059 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.891084 kubelet[2796]: W0213 19:34:26.891070 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.891157 kubelet[2796]: E0213 19:34:26.891093 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.891408 kubelet[2796]: E0213 19:34:26.891385 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.891452 kubelet[2796]: W0213 19:34:26.891409 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.891452 kubelet[2796]: E0213 19:34:26.891433 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.891636 kubelet[2796]: E0213 19:34:26.891614 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.891636 kubelet[2796]: W0213 19:34:26.891627 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.891636 kubelet[2796]: E0213 19:34:26.891641 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.891861 kubelet[2796]: E0213 19:34:26.891848 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.891861 kubelet[2796]: W0213 19:34:26.891859 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.891918 kubelet[2796]: E0213 19:34:26.891873 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.892195 kubelet[2796]: E0213 19:34:26.892163 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.892195 kubelet[2796]: W0213 19:34:26.892188 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.892249 kubelet[2796]: E0213 19:34:26.892216 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.892454 kubelet[2796]: E0213 19:34:26.892431 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.892454 kubelet[2796]: W0213 19:34:26.892442 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.892588 kubelet[2796]: E0213 19:34:26.892477 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.892682 kubelet[2796]: E0213 19:34:26.892666 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.892682 kubelet[2796]: W0213 19:34:26.892676 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.892740 kubelet[2796]: E0213 19:34:26.892702 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.892923 kubelet[2796]: E0213 19:34:26.892910 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.892923 kubelet[2796]: W0213 19:34:26.892920 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.892985 kubelet[2796]: E0213 19:34:26.892933 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.893131 kubelet[2796]: E0213 19:34:26.893117 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.893131 kubelet[2796]: W0213 19:34:26.893129 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.893247 kubelet[2796]: E0213 19:34:26.893143 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.893313 kubelet[2796]: E0213 19:34:26.893301 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.893313 kubelet[2796]: W0213 19:34:26.893311 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.893358 kubelet[2796]: E0213 19:34:26.893324 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.893500 kubelet[2796]: E0213 19:34:26.893475 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.893500 kubelet[2796]: W0213 19:34:26.893486 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.893544 kubelet[2796]: E0213 19:34:26.893502 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.893774 kubelet[2796]: E0213 19:34:26.893759 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.893774 kubelet[2796]: W0213 19:34:26.893770 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.893870 kubelet[2796]: E0213 19:34:26.893783 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.894052 kubelet[2796]: E0213 19:34:26.894037 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.894052 kubelet[2796]: W0213 19:34:26.894049 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.894109 kubelet[2796]: E0213 19:34:26.894064 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.894290 kubelet[2796]: E0213 19:34:26.894276 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.894290 kubelet[2796]: W0213 19:34:26.894286 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.894342 kubelet[2796]: E0213 19:34:26.894309 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.894474 kubelet[2796]: E0213 19:34:26.894462 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.894474 kubelet[2796]: W0213 19:34:26.894472 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.894559 kubelet[2796]: E0213 19:34:26.894491 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.894689 kubelet[2796]: E0213 19:34:26.894673 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.894689 kubelet[2796]: W0213 19:34:26.894686 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.894742 kubelet[2796]: E0213 19:34:26.894702 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.894946 kubelet[2796]: E0213 19:34:26.894929 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.894946 kubelet[2796]: W0213 19:34:26.894943 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.895003 kubelet[2796]: E0213 19:34:26.894970 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.895174 kubelet[2796]: E0213 19:34:26.895161 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.895174 kubelet[2796]: W0213 19:34:26.895172 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.895219 kubelet[2796]: E0213 19:34:26.895184 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.895363 kubelet[2796]: E0213 19:34:26.895348 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.895363 kubelet[2796]: W0213 19:34:26.895361 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.895404 kubelet[2796]: E0213 19:34:26.895375 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.895557 kubelet[2796]: E0213 19:34:26.895543 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.895557 kubelet[2796]: W0213 19:34:26.895554 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.895601 kubelet[2796]: E0213 19:34:26.895568 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.895747 kubelet[2796]: E0213 19:34:26.895734 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.895747 kubelet[2796]: W0213 19:34:26.895745 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.895788 kubelet[2796]: E0213 19:34:26.895759 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.895981 kubelet[2796]: E0213 19:34:26.895967 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.895981 kubelet[2796]: W0213 19:34:26.895977 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.896039 kubelet[2796]: E0213 19:34:26.895990 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.896215 kubelet[2796]: E0213 19:34:26.896201 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.896215 kubelet[2796]: W0213 19:34:26.896213 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.896262 kubelet[2796]: E0213 19:34:26.896226 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:26.896429 kubelet[2796]: E0213 19:34:26.896413 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.896429 kubelet[2796]: W0213 19:34:26.896425 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.896504 kubelet[2796]: E0213 19:34:26.896438 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:26.938647 kubelet[2796]: E0213 19:34:26.938611 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:26.938647 kubelet[2796]: W0213 19:34:26.938632 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:26.938647 kubelet[2796]: E0213 19:34:26.938652 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:27.005067 containerd[1579]: time="2025-02-13T19:34:27.004782831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:27.005067 containerd[1579]: time="2025-02-13T19:34:27.004858794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:27.005067 containerd[1579]: time="2025-02-13T19:34:27.004871008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:27.005811 containerd[1579]: time="2025-02-13T19:34:27.005737718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:27.056157 containerd[1579]: time="2025-02-13T19:34:27.055876226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7tpt,Uid:1157e559-65ca-4388-8be5-d109dfc71dc1,Namespace:calico-system,Attempt:0,} returns sandbox id \"19894ea7d6a4f230d7282755f2872cc89e08b68c343ee81557ac447474f43f79\"" Feb 13 19:34:27.056670 kubelet[2796]: E0213 19:34:27.056651 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:28.055895 kubelet[2796]: E0213 19:34:28.055847 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:29.453336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631942093.mount: Deactivated successfully. 
Feb 13 19:34:30.042218 containerd[1579]: time="2025-02-13T19:34:30.042154020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:30.056862 kubelet[2796]: E0213 19:34:30.056789 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:30.067287 containerd[1579]: time="2025-02-13T19:34:30.067234386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 19:34:30.094494 containerd[1579]: time="2025-02-13T19:34:30.094451159Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:30.131359 containerd[1579]: time="2025-02-13T19:34:30.131328589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:30.131978 containerd[1579]: time="2025-02-13T19:34:30.131953941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.299034063s" Feb 13 19:34:30.132048 containerd[1579]: time="2025-02-13T19:34:30.131980881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:34:30.133026 containerd[1579]: time="2025-02-13T19:34:30.132994356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:34:30.139133 containerd[1579]: time="2025-02-13T19:34:30.139094325Z" level=info msg="CreateContainer within sandbox \"10ce1f74f9926f32e2cfe177c3eb25f8e463ec376e15449f9e9df929bcf26f88\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:34:30.560243 containerd[1579]: time="2025-02-13T19:34:30.560189305Z" level=info msg="CreateContainer within sandbox \"10ce1f74f9926f32e2cfe177c3eb25f8e463ec376e15449f9e9df929bcf26f88\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"87c7fa8286b5d98671fca3b0914a91320ef7c341d80d2c727a3c30427c048975\"" Feb 13 19:34:30.560691 containerd[1579]: time="2025-02-13T19:34:30.560626692Z" level=info msg="StartContainer for \"87c7fa8286b5d98671fca3b0914a91320ef7c341d80d2c727a3c30427c048975\"" Feb 13 19:34:30.724632 containerd[1579]: time="2025-02-13T19:34:30.724581107Z" level=info msg="StartContainer for \"87c7fa8286b5d98671fca3b0914a91320ef7c341d80d2c727a3c30427c048975\" returns successfully" Feb 13 19:34:31.137335 kubelet[2796]: E0213 19:34:31.137300 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:31.215248 kubelet[2796]: E0213 19:34:31.215214 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.215248 kubelet[2796]: W0213 19:34:31.215234 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.215248 kubelet[2796]: E0213 19:34:31.215252 2796 plugins.go:730] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.215469 kubelet[2796]: E0213 19:34:31.215447 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.215469 kubelet[2796]: W0213 19:34:31.215458 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.215469 kubelet[2796]: E0213 19:34:31.215466 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.215668 kubelet[2796]: E0213 19:34:31.215646 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.215668 kubelet[2796]: W0213 19:34:31.215656 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.215668 kubelet[2796]: E0213 19:34:31.215664 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.215861 kubelet[2796]: E0213 19:34:31.215846 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.215861 kubelet[2796]: W0213 19:34:31.215858 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.215933 kubelet[2796]: E0213 19:34:31.215866 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.216062 kubelet[2796]: E0213 19:34:31.216048 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.216062 kubelet[2796]: W0213 19:34:31.216058 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.216130 kubelet[2796]: E0213 19:34:31.216066 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.216251 kubelet[2796]: E0213 19:34:31.216236 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.216251 kubelet[2796]: W0213 19:34:31.216246 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.216324 kubelet[2796]: E0213 19:34:31.216254 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.216435 kubelet[2796]: E0213 19:34:31.216420 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.216435 kubelet[2796]: W0213 19:34:31.216430 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.216435 kubelet[2796]: E0213 19:34:31.216437 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.216616 kubelet[2796]: E0213 19:34:31.216602 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.216616 kubelet[2796]: W0213 19:34:31.216611 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.216616 kubelet[2796]: E0213 19:34:31.216619 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.216823 kubelet[2796]: E0213 19:34:31.216790 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.216823 kubelet[2796]: W0213 19:34:31.216814 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.216823 kubelet[2796]: E0213 19:34:31.216822 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.217012 kubelet[2796]: E0213 19:34:31.216993 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.217012 kubelet[2796]: W0213 19:34:31.217002 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.217012 kubelet[2796]: E0213 19:34:31.217009 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.217207 kubelet[2796]: E0213 19:34:31.217188 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.217207 kubelet[2796]: W0213 19:34:31.217197 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.217207 kubelet[2796]: E0213 19:34:31.217205 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.217390 kubelet[2796]: E0213 19:34:31.217372 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.217390 kubelet[2796]: W0213 19:34:31.217381 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.217390 kubelet[2796]: E0213 19:34:31.217389 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.217575 kubelet[2796]: E0213 19:34:31.217555 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.217575 kubelet[2796]: W0213 19:34:31.217565 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.217575 kubelet[2796]: E0213 19:34:31.217572 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.217756 kubelet[2796]: E0213 19:34:31.217736 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.217756 kubelet[2796]: W0213 19:34:31.217746 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.217756 kubelet[2796]: E0213 19:34:31.217753 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.217971 kubelet[2796]: E0213 19:34:31.217952 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.217971 kubelet[2796]: W0213 19:34:31.217963 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.217971 kubelet[2796]: E0213 19:34:31.217971 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.223317 kubelet[2796]: E0213 19:34:31.223290 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.223317 kubelet[2796]: W0213 19:34:31.223309 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.223405 kubelet[2796]: E0213 19:34:31.223328 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.223605 kubelet[2796]: E0213 19:34:31.223583 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.223605 kubelet[2796]: W0213 19:34:31.223596 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.223677 kubelet[2796]: E0213 19:34:31.223608 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.223849 kubelet[2796]: E0213 19:34:31.223833 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.223849 kubelet[2796]: W0213 19:34:31.223845 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.223932 kubelet[2796]: E0213 19:34:31.223858 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.224092 kubelet[2796]: E0213 19:34:31.224066 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.224092 kubelet[2796]: W0213 19:34:31.224080 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.224157 kubelet[2796]: E0213 19:34:31.224094 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.224281 kubelet[2796]: E0213 19:34:31.224267 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.224281 kubelet[2796]: W0213 19:34:31.224277 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.224349 kubelet[2796]: E0213 19:34:31.224288 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.224460 kubelet[2796]: E0213 19:34:31.224439 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.224460 kubelet[2796]: W0213 19:34:31.224450 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.224460 kubelet[2796]: E0213 19:34:31.224463 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.224740 kubelet[2796]: E0213 19:34:31.224709 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.224740 kubelet[2796]: W0213 19:34:31.224723 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.224740 kubelet[2796]: E0213 19:34:31.224740 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.225013 kubelet[2796]: E0213 19:34:31.224996 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.225013 kubelet[2796]: W0213 19:34:31.225010 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.225108 kubelet[2796]: E0213 19:34:31.225031 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.225276 kubelet[2796]: E0213 19:34:31.225262 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.225313 kubelet[2796]: W0213 19:34:31.225276 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.225398 kubelet[2796]: E0213 19:34:31.225322 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.225520 kubelet[2796]: E0213 19:34:31.225488 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.225520 kubelet[2796]: W0213 19:34:31.225508 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.225566 kubelet[2796]: E0213 19:34:31.225540 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.225831 kubelet[2796]: E0213 19:34:31.225723 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.225831 kubelet[2796]: W0213 19:34:31.225735 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.225831 kubelet[2796]: E0213 19:34:31.225777 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.225996 kubelet[2796]: E0213 19:34:31.225983 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.226067 kubelet[2796]: W0213 19:34:31.226041 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.226067 kubelet[2796]: E0213 19:34:31.226063 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.226907 kubelet[2796]: E0213 19:34:31.226884 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.226907 kubelet[2796]: W0213 19:34:31.226898 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.227016 kubelet[2796]: E0213 19:34:31.226918 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.227229 kubelet[2796]: E0213 19:34:31.227206 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.227229 kubelet[2796]: W0213 19:34:31.227224 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.227309 kubelet[2796]: E0213 19:34:31.227248 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.227508 kubelet[2796]: E0213 19:34:31.227493 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.227508 kubelet[2796]: W0213 19:34:31.227507 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.227571 kubelet[2796]: E0213 19:34:31.227518 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.227758 kubelet[2796]: E0213 19:34:31.227743 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.227758 kubelet[2796]: W0213 19:34:31.227756 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.227840 kubelet[2796]: E0213 19:34:31.227766 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:31.228028 kubelet[2796]: E0213 19:34:31.228005 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.228052 kubelet[2796]: W0213 19:34:31.228018 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.228052 kubelet[2796]: E0213 19:34:31.228040 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:31.228490 kubelet[2796]: E0213 19:34:31.228466 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:31.228490 kubelet[2796]: W0213 19:34:31.228481 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:31.228534 kubelet[2796]: E0213 19:34:31.228491 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:32.056215 kubelet[2796]: E0213 19:34:32.056163 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:32.138624 kubelet[2796]: I0213 19:34:32.138589 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:34:32.139163 kubelet[2796]: E0213 19:34:32.139146 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:32.224588 kubelet[2796]: E0213 19:34:32.224550 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.224588 kubelet[2796]: W0213 19:34:32.224572 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.224588 kubelet[2796]: E0213 19:34:32.224591 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:32.224872 kubelet[2796]: E0213 19:34:32.224859 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.224872 kubelet[2796]: W0213 19:34:32.224870 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.224935 kubelet[2796]: E0213 19:34:32.224879 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:32.225108 kubelet[2796]: E0213 19:34:32.225095 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.225108 kubelet[2796]: W0213 19:34:32.225106 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.225161 kubelet[2796]: E0213 19:34:32.225114 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:32.225317 kubelet[2796]: E0213 19:34:32.225304 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.225317 kubelet[2796]: W0213 19:34:32.225315 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.225367 kubelet[2796]: E0213 19:34:32.225322 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:32.225532 kubelet[2796]: E0213 19:34:32.225512 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.225532 kubelet[2796]: W0213 19:34:32.225524 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.225532 kubelet[2796]: E0213 19:34:32.225531 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:32.225721 kubelet[2796]: E0213 19:34:32.225708 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.225721 kubelet[2796]: W0213 19:34:32.225718 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.225778 kubelet[2796]: E0213 19:34:32.225727 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:32.225925 kubelet[2796]: E0213 19:34:32.225912 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.225925 kubelet[2796]: W0213 19:34:32.225922 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.225989 kubelet[2796]: E0213 19:34:32.225930 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:32.226130 kubelet[2796]: E0213 19:34:32.226117 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.226130 kubelet[2796]: W0213 19:34:32.226127 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.226182 kubelet[2796]: E0213 19:34:32.226135 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:32.226322 kubelet[2796]: E0213 19:34:32.226309 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:32.226322 kubelet[2796]: W0213 19:34:32.226319 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:32.226377 kubelet[2796]: E0213 19:34:32.226326 2796 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:32.817956 containerd[1579]: time="2025-02-13T19:34:32.817892251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:32.850326 containerd[1579]: time="2025-02-13T19:34:32.850220195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 19:34:32.897547 containerd[1579]: time="2025-02-13T19:34:32.897355115Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:32.918155 containerd[1579]: time="2025-02-13T19:34:32.918114478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:32.920199 containerd[1579]: time="2025-02-13T19:34:32.920075340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.787043863s" Feb 13 19:34:32.920199 containerd[1579]: time="2025-02-13T19:34:32.920127829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:34:32.923972 containerd[1579]: time="2025-02-13T19:34:32.923939836Z" level=info msg="CreateContainer within sandbox \"19894ea7d6a4f230d7282755f2872cc89e08b68c343ee81557ac447474f43f79\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:34:33.318059 containerd[1579]: time="2025-02-13T19:34:33.318013671Z" level=info msg="CreateContainer within sandbox \"19894ea7d6a4f230d7282755f2872cc89e08b68c343ee81557ac447474f43f79\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a0c8021d23ad0b4486024772150fbe8833ece7b0d67a79033b4c353391a0e69\"" Feb 13 19:34:33.318618 containerd[1579]: time="2025-02-13T19:34:33.318580140Z" level=info msg="StartContainer for \"0a0c8021d23ad0b4486024772150fbe8833ece7b0d67a79033b4c353391a0e69\"" Feb 13 19:34:33.417751 containerd[1579]: time="2025-02-13T19:34:33.417692472Z" level=info msg="StartContainer for \"0a0c8021d23ad0b4486024772150fbe8833ece7b0d67a79033b4c353391a0e69\" returns successfully" Feb 13 19:34:34.056410 kubelet[2796]: E0213 19:34:34.056355 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:34.143232 kubelet[2796]: E0213 19:34:34.143202 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:34.168963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a0c8021d23ad0b4486024772150fbe8833ece7b0d67a79033b4c353391a0e69-rootfs.mount: Deactivated successfully. 
Feb 13 19:34:34.595854 kubelet[2796]: I0213 19:34:34.595557 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fd5668d57-xgh97" podStartSLOduration=5.295182388 podStartE2EDuration="8.595538401s" podCreationTimestamp="2025-02-13 19:34:26 +0000 UTC" firstStartedPulling="2025-02-13 19:34:26.832469104 +0000 UTC m=+19.868694806" lastFinishedPulling="2025-02-13 19:34:30.132825117 +0000 UTC m=+23.169050819" observedRunningTime="2025-02-13 19:34:31.197633443 +0000 UTC m=+24.233859145" watchObservedRunningTime="2025-02-13 19:34:34.595538401 +0000 UTC m=+27.631764113" Feb 13 19:34:34.664647 containerd[1579]: time="2025-02-13T19:34:34.664577998Z" level=info msg="shim disconnected" id=0a0c8021d23ad0b4486024772150fbe8833ece7b0d67a79033b4c353391a0e69 namespace=k8s.io Feb 13 19:34:34.664647 containerd[1579]: time="2025-02-13T19:34:34.664633593Z" level=warning msg="cleaning up after shim disconnected" id=0a0c8021d23ad0b4486024772150fbe8833ece7b0d67a79033b4c353391a0e69 namespace=k8s.io Feb 13 19:34:34.664647 containerd[1579]: time="2025-02-13T19:34:34.664641588Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:35.097176 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:54494.service - OpenSSH per-connection server daemon (10.0.0.1:54494). 
Feb 13 19:34:35.145751 kubelet[2796]: E0213 19:34:35.145721 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:35.146632 containerd[1579]: time="2025-02-13T19:34:35.146596748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:34:35.156725 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 54494 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:34:35.159681 sshd-session[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:35.164889 systemd-logind[1555]: New session 8 of user core. Feb 13 19:34:35.177047 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:34:35.363943 sshd[3554]: Connection closed by 10.0.0.1 port 54494 Feb 13 19:34:35.364308 sshd-session[3551]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:35.369528 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:54494.service: Deactivated successfully. Feb 13 19:34:35.372101 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:34:35.372199 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:34:35.373469 systemd-logind[1555]: Removed session 8. 
Feb 13 19:34:36.055856 kubelet[2796]: E0213 19:34:36.055795 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:38.056463 kubelet[2796]: E0213 19:34:38.056415 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:40.059424 kubelet[2796]: E0213 19:34:40.059375 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:40.380413 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:50922.service - OpenSSH per-connection server daemon (10.0.0.1:50922). Feb 13 19:34:40.449788 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 50922 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:34:40.451766 sshd-session[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:40.456757 systemd-logind[1555]: New session 9 of user core. Feb 13 19:34:40.468310 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:34:40.782605 sshd[3579]: Connection closed by 10.0.0.1 port 50922 Feb 13 19:34:40.782956 sshd-session[3576]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:40.788008 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:50922.service: Deactivated successfully. 
Feb 13 19:34:40.790849 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:34:40.790983 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:34:40.792038 systemd-logind[1555]: Removed session 9. Feb 13 19:34:40.938177 containerd[1579]: time="2025-02-13T19:34:40.938056441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:41.293649 containerd[1579]: time="2025-02-13T19:34:41.293559650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:34:41.436316 containerd[1579]: time="2025-02-13T19:34:41.436259849Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:41.498480 containerd[1579]: time="2025-02-13T19:34:41.498424770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:41.499304 containerd[1579]: time="2025-02-13T19:34:41.499273677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.352634279s" Feb 13 19:34:41.499304 containerd[1579]: time="2025-02-13T19:34:41.499300177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:34:41.502001 containerd[1579]: time="2025-02-13T19:34:41.501974702Z" level=info msg="CreateContainer within sandbox 
\"19894ea7d6a4f230d7282755f2872cc89e08b68c343ee81557ac447474f43f79\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:34:41.650465 containerd[1579]: time="2025-02-13T19:34:41.650409876Z" level=info msg="CreateContainer within sandbox \"19894ea7d6a4f230d7282755f2872cc89e08b68c343ee81557ac447474f43f79\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"637d788b06a7744b6845709688f7e178fc777d859cfbd8cf7ceb3b4adeb2d478\"" Feb 13 19:34:41.650983 containerd[1579]: time="2025-02-13T19:34:41.650947206Z" level=info msg="StartContainer for \"637d788b06a7744b6845709688f7e178fc777d859cfbd8cf7ceb3b4adeb2d478\"" Feb 13 19:34:41.708204 containerd[1579]: time="2025-02-13T19:34:41.708144636Z" level=info msg="StartContainer for \"637d788b06a7744b6845709688f7e178fc777d859cfbd8cf7ceb3b4adeb2d478\" returns successfully" Feb 13 19:34:41.830615 kubelet[2796]: E0213 19:34:41.830582 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:42.056196 kubelet[2796]: E0213 19:34:42.056066 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:42.644653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-637d788b06a7744b6845709688f7e178fc777d859cfbd8cf7ceb3b4adeb2d478-rootfs.mount: Deactivated successfully. 
Feb 13 19:34:42.648968 containerd[1579]: time="2025-02-13T19:34:42.648910058Z" level=info msg="shim disconnected" id=637d788b06a7744b6845709688f7e178fc777d859cfbd8cf7ceb3b4adeb2d478 namespace=k8s.io Feb 13 19:34:42.649276 containerd[1579]: time="2025-02-13T19:34:42.648969900Z" level=warning msg="cleaning up after shim disconnected" id=637d788b06a7744b6845709688f7e178fc777d859cfbd8cf7ceb3b4adeb2d478 namespace=k8s.io Feb 13 19:34:42.649276 containerd[1579]: time="2025-02-13T19:34:42.648977925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:42.706204 kubelet[2796]: I0213 19:34:42.706178 2796 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:34:42.722876 kubelet[2796]: I0213 19:34:42.722783 2796 topology_manager.go:215] "Topology Admit Handler" podUID="7703fc78-f952-4a15-b2f4-c2b67bf6b32a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-krffx" Feb 13 19:34:42.726794 kubelet[2796]: I0213 19:34:42.726699 2796 topology_manager.go:215] "Topology Admit Handler" podUID="3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-97hdm" Feb 13 19:34:42.726949 kubelet[2796]: I0213 19:34:42.726846 2796 topology_manager.go:215] "Topology Admit Handler" podUID="66ee47e8-47e1-43f0-b3d2-64715b2b7237" podNamespace="calico-system" podName="calico-kube-controllers-6d854b45db-vjxr2" Feb 13 19:34:42.728185 kubelet[2796]: I0213 19:34:42.728112 2796 topology_manager.go:215] "Topology Admit Handler" podUID="7ddf3646-1cea-4076-a090-fff52499412c" podNamespace="calico-apiserver" podName="calico-apiserver-87cc8ff8d-k264t" Feb 13 19:34:42.729016 kubelet[2796]: I0213 19:34:42.728995 2796 topology_manager.go:215] "Topology Admit Handler" podUID="23896514-92a6-4ab0-b171-3b7efd7da770" podNamespace="calico-apiserver" podName="calico-apiserver-87cc8ff8d-j5sxt" Feb 13 19:34:42.833467 kubelet[2796]: E0213 19:34:42.833429 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:42.834569 containerd[1579]: time="2025-02-13T19:34:42.834537101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:34:42.904628 kubelet[2796]: I0213 19:34:42.904521 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66ee47e8-47e1-43f0-b3d2-64715b2b7237-tigera-ca-bundle\") pod \"calico-kube-controllers-6d854b45db-vjxr2\" (UID: \"66ee47e8-47e1-43f0-b3d2-64715b2b7237\") " pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:42.904628 kubelet[2796]: I0213 19:34:42.904556 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrtxn\" (UniqueName: \"kubernetes.io/projected/23896514-92a6-4ab0-b171-3b7efd7da770-kube-api-access-lrtxn\") pod \"calico-apiserver-87cc8ff8d-j5sxt\" (UID: \"23896514-92a6-4ab0-b171-3b7efd7da770\") " pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt"
Feb 13 19:34:42.904761 kubelet[2796]: I0213 19:34:42.904662 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23896514-92a6-4ab0-b171-3b7efd7da770-calico-apiserver-certs\") pod \"calico-apiserver-87cc8ff8d-j5sxt\" (UID: \"23896514-92a6-4ab0-b171-3b7efd7da770\") " pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt"
Feb 13 19:34:42.904761 kubelet[2796]: I0213 19:34:42.904695 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7703fc78-f952-4a15-b2f4-c2b67bf6b32a-config-volume\") pod \"coredns-7db6d8ff4d-krffx\" (UID: \"7703fc78-f952-4a15-b2f4-c2b67bf6b32a\") " pod="kube-system/coredns-7db6d8ff4d-krffx"
Feb 13 19:34:42.904761 kubelet[2796]: I0213 19:34:42.904714 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptfw\" (UniqueName: \"kubernetes.io/projected/3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6-kube-api-access-tptfw\") pod \"coredns-7db6d8ff4d-97hdm\" (UID: \"3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6\") " pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:42.904761 kubelet[2796]: I0213 19:34:42.904740 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5ncc\" (UniqueName: \"kubernetes.io/projected/7ddf3646-1cea-4076-a090-fff52499412c-kube-api-access-r5ncc\") pod \"calico-apiserver-87cc8ff8d-k264t\" (UID: \"7ddf3646-1cea-4076-a090-fff52499412c\") " pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:42.904761 kubelet[2796]: I0213 19:34:42.904757 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6-config-volume\") pod \"coredns-7db6d8ff4d-97hdm\" (UID: \"3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6\") " pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:42.904903 kubelet[2796]: I0213 19:34:42.904795 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ddf3646-1cea-4076-a090-fff52499412c-calico-apiserver-certs\") pod \"calico-apiserver-87cc8ff8d-k264t\" (UID: \"7ddf3646-1cea-4076-a090-fff52499412c\") " pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:42.904903 kubelet[2796]: I0213 19:34:42.904841 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrkzk\" (UniqueName: \"kubernetes.io/projected/7703fc78-f952-4a15-b2f4-c2b67bf6b32a-kube-api-access-vrkzk\") pod \"coredns-7db6d8ff4d-krffx\" (UID: \"7703fc78-f952-4a15-b2f4-c2b67bf6b32a\") " pod="kube-system/coredns-7db6d8ff4d-krffx"
Feb 13 19:34:42.904903 kubelet[2796]: I0213 19:34:42.904859 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx684\" (UniqueName: \"kubernetes.io/projected/66ee47e8-47e1-43f0-b3d2-64715b2b7237-kube-api-access-gx684\") pod \"calico-kube-controllers-6d854b45db-vjxr2\" (UID: \"66ee47e8-47e1-43f0-b3d2-64715b2b7237\") " pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:43.027878 kubelet[2796]: E0213 19:34:43.027844 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:43.028454 containerd[1579]: time="2025-02-13T19:34:43.028412125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:0,}"
Feb 13 19:34:43.032601 containerd[1579]: time="2025-02-13T19:34:43.032559878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:0,}"
Feb 13 19:34:43.035351 containerd[1579]: time="2025-02-13T19:34:43.035321184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 19:34:43.037667 kubelet[2796]: E0213 19:34:43.037641 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:43.038103 containerd[1579]: time="2025-02-13T19:34:43.038063352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:0,}"
Feb 13 19:34:43.038591 containerd[1579]: time="2025-02-13T19:34:43.038461682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 19:34:43.160087 containerd[1579]: time="2025-02-13T19:34:43.159729433Z" level=error msg="Failed to destroy network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.164771 containerd[1579]: time="2025-02-13T19:34:43.164667514Z" level=error msg="Failed to destroy network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.168126 containerd[1579]: time="2025-02-13T19:34:43.168103608Z" level=error msg="Failed to destroy network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.168508 containerd[1579]: time="2025-02-13T19:34:43.168482240Z" level=error msg="encountered an error cleaning up failed sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.168575 containerd[1579]: time="2025-02-13T19:34:43.168548566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.168738 containerd[1579]: time="2025-02-13T19:34:43.168561680Z" level=error msg="encountered an error cleaning up failed sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.168738 containerd[1579]: time="2025-02-13T19:34:43.168671286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.175883 containerd[1579]: time="2025-02-13T19:34:43.175852887Z" level=error msg="encountered an error cleaning up failed sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.175970 containerd[1579]: time="2025-02-13T19:34:43.175952174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.180959 containerd[1579]: time="2025-02-13T19:34:43.180903270Z" level=error msg="Failed to destroy network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.181329 containerd[1579]: time="2025-02-13T19:34:43.181300807Z" level=error msg="encountered an error cleaning up failed sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.181378 containerd[1579]: time="2025-02-13T19:34:43.181354829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.187007 kubelet[2796]: E0213 19:34:43.186959 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.187073 kubelet[2796]: E0213 19:34:43.187010 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.187073 kubelet[2796]: E0213 19:34:43.187033 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt"
Feb 13 19:34:43.187073 kubelet[2796]: E0213 19:34:43.187056 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:43.187073 kubelet[2796]: E0213 19:34:43.187070 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt"
Feb 13 19:34:43.187853 kubelet[2796]: E0213 19:34:43.187075 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:43.187853 kubelet[2796]: E0213 19:34:43.187112 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" podUID="23896514-92a6-4ab0-b171-3b7efd7da770"
Feb 13 19:34:43.187951 kubelet[2796]: E0213 19:34:43.187115 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" podUID="66ee47e8-47e1-43f0-b3d2-64715b2b7237"
Feb 13 19:34:43.187951 kubelet[2796]: E0213 19:34:43.187141 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.187951 kubelet[2796]: E0213 19:34:43.187159 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx"
Feb 13 19:34:43.188046 kubelet[2796]: E0213 19:34:43.187173 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx"
Feb 13 19:34:43.188046 kubelet[2796]: E0213 19:34:43.187195 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-krffx" podUID="7703fc78-f952-4a15-b2f4-c2b67bf6b32a"
Feb 13 19:34:43.188046 kubelet[2796]: E0213 19:34:43.186974 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.188138 kubelet[2796]: E0213 19:34:43.187224 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:43.188138 kubelet[2796]: E0213 19:34:43.187236 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:43.188138 kubelet[2796]: E0213 19:34:43.187254 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" podUID="7ddf3646-1cea-4076-a090-fff52499412c"
Feb 13 19:34:43.204924 containerd[1579]: time="2025-02-13T19:34:43.204867555Z" level=error msg="Failed to destroy network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.205269 containerd[1579]: time="2025-02-13T19:34:43.205243723Z" level=error msg="encountered an error cleaning up failed sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.205313 containerd[1579]: time="2025-02-13T19:34:43.205297154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.205531 kubelet[2796]: E0213 19:34:43.205496 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.205599 kubelet[2796]: E0213 19:34:43.205550 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:43.205599 kubelet[2796]: E0213 19:34:43.205571 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:43.205659 kubelet[2796]: E0213 19:34:43.205613 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-97hdm" podUID="3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6"
Feb 13 19:34:43.835591 kubelet[2796]: I0213 19:34:43.835563 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1"
Feb 13 19:34:43.836248 kubelet[2796]: I0213 19:34:43.836233 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e"
Feb 13 19:34:43.838174 kubelet[2796]: I0213 19:34:43.838148 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f"
Feb 13 19:34:43.839223 kubelet[2796]: I0213 19:34:43.838964 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4"
Feb 13 19:34:43.847842 containerd[1579]: time="2025-02-13T19:34:43.847781992Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\""
Feb 13 19:34:43.856399 containerd[1579]: time="2025-02-13T19:34:43.855477159Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\""
Feb 13 19:34:43.856399 containerd[1579]: time="2025-02-13T19:34:43.855519739Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\""
Feb 13 19:34:43.856399 containerd[1579]: time="2025-02-13T19:34:43.855754231Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\""
Feb 13 19:34:43.857879 containerd[1579]: time="2025-02-13T19:34:43.857703808Z" level=info msg="Ensure that sandbox fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f in task-service has been cleanup successfully"
Feb 13 19:34:43.857879 containerd[1579]: time="2025-02-13T19:34:43.857721531Z" level=info msg="Ensure that sandbox b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4 in task-service has been cleanup successfully"
Feb 13 19:34:43.858035 containerd[1579]: time="2025-02-13T19:34:43.858006127Z" level=info msg="TearDown network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" successfully"
Feb 13 19:34:43.858035 containerd[1579]: time="2025-02-13T19:34:43.858024181Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" returns successfully"
Feb 13 19:34:43.858181 containerd[1579]: time="2025-02-13T19:34:43.857723195Z" level=info msg="Ensure that sandbox e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e in task-service has been cleanup successfully"
Feb 13 19:34:43.858363 containerd[1579]: time="2025-02-13T19:34:43.858252351Z" level=info msg="TearDown network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" successfully"
Feb 13 19:34:43.858363 containerd[1579]: time="2025-02-13T19:34:43.858264664Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" returns successfully"
Feb 13 19:34:43.858363 containerd[1579]: time="2025-02-13T19:34:43.858280123Z" level=info msg="TearDown network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" successfully"
Feb 13 19:34:43.858363 containerd[1579]: time="2025-02-13T19:34:43.858298327Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" returns successfully"
Feb 13 19:34:43.858500 kubelet[2796]: E0213 19:34:43.858274 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:43.858500 kubelet[2796]: E0213 19:34:43.858481 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:43.858960 containerd[1579]: time="2025-02-13T19:34:43.858844654Z" level=info msg="Ensure that sandbox 58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1 in task-service has been cleanup successfully"
Feb 13 19:34:43.859270 containerd[1579]: time="2025-02-13T19:34:43.859022720Z" level=info msg="TearDown network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" successfully"
Feb 13 19:34:43.859270 containerd[1579]: time="2025-02-13T19:34:43.859041045Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" returns successfully"
Feb 13 19:34:43.859270 containerd[1579]: time="2025-02-13T19:34:43.859068496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:1,}"
Feb 13 19:34:43.859270 containerd[1579]: time="2025-02-13T19:34:43.859083905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:1,}"
Feb 13 19:34:43.859270 containerd[1579]: time="2025-02-13T19:34:43.859107229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:1,}"
Feb 13 19:34:43.859650 containerd[1579]: time="2025-02-13T19:34:43.859621245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 19:34:43.859857 kubelet[2796]: I0213 19:34:43.859837 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a"
Feb 13 19:34:43.860362 containerd[1579]: time="2025-02-13T19:34:43.860339457Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\""
Feb 13 19:34:43.860528 containerd[1579]: time="2025-02-13T19:34:43.860500730Z" level=info msg="Ensure that sandbox c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a in task-service has been cleanup successfully"
Feb 13 19:34:43.860685 containerd[1579]: time="2025-02-13T19:34:43.860665150Z" level=info msg="TearDown network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" successfully"
Feb 13 19:34:43.860685 containerd[1579]: time="2025-02-13T19:34:43.860681520Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" returns successfully"
Feb 13 19:34:43.861004 containerd[1579]: time="2025-02-13T19:34:43.860982697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 19:34:43.982032 containerd[1579]: time="2025-02-13T19:34:43.981444051Z" level=error msg="Failed to destroy network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.982032 containerd[1579]: time="2025-02-13T19:34:43.981879019Z" level=error msg="encountered an error cleaning up failed sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.982032 containerd[1579]: time="2025-02-13T19:34:43.981924845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.982206 kubelet[2796]: E0213 19:34:43.982153 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.982246 kubelet[2796]: E0213 19:34:43.982211 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:43.982246 kubelet[2796]: E0213 19:34:43.982232 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:43.982330 kubelet[2796]: E0213 19:34:43.982269 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-97hdm" podUID="3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6"
Feb 13 19:34:43.997972 containerd[1579]: time="2025-02-13T19:34:43.997834057Z" level=error msg="Failed to destroy network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.998343 containerd[1579]: time="2025-02-13T19:34:43.998316444Z" level=error msg="encountered an error cleaning up failed sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.998442 containerd[1579]: time="2025-02-13T19:34:43.998424818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.998816 kubelet[2796]: E0213 19:34:43.998751 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:43.998873 kubelet[2796]: E0213 19:34:43.998836 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:43.998873 kubelet[2796]: E0213 19:34:43.998857 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:43.998921 kubelet[2796]: E0213 19:34:43.998899 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" podUID="66ee47e8-47e1-43f0-b3d2-64715b2b7237"
Feb 13 19:34:44.000653 containerd[1579]: time="2025-02-13T19:34:44.000626781Z" level=error msg="Failed to destroy network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:44.001126 containerd[1579]: time="2025-02-13T19:34:44.001095463Z" level=error msg="encountered an error cleaning up failed sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:44.001235 containerd[1579]: time="2025-02-13T19:34:44.001211791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:44.001621 kubelet[2796]: E0213 19:34:44.001489 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:44.001621 kubelet[2796]: E0213 19:34:44.001579 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:44.001621 kubelet[2796]: E0213 19:34:44.001601 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:44.001777 kubelet[2796]: E0213 19:34:44.001639 2796
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" podUID="7ddf3646-1cea-4076-a090-fff52499412c" Feb 13 19:34:44.006259 containerd[1579]: time="2025-02-13T19:34:44.006218188Z" level=error msg="Failed to destroy network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.006666 containerd[1579]: time="2025-02-13T19:34:44.006629001Z" level=error msg="encountered an error cleaning up failed sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.006709 containerd[1579]: time="2025-02-13T19:34:44.006687250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.006912 kubelet[2796]: E0213 19:34:44.006882 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.006971 kubelet[2796]: E0213 19:34:44.006929 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx" Feb 13 19:34:44.006971 kubelet[2796]: E0213 19:34:44.006947 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx" Feb 13 19:34:44.007029 kubelet[2796]: E0213 19:34:44.006983 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-krffx" podUID="7703fc78-f952-4a15-b2f4-c2b67bf6b32a" Feb 13 19:34:44.008141 containerd[1579]: time="2025-02-13T19:34:44.008025177Z" level=error msg="Failed to destroy network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.008478 containerd[1579]: time="2025-02-13T19:34:44.008445208Z" level=error msg="encountered an error cleaning up failed sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.008526 containerd[1579]: time="2025-02-13T19:34:44.008499830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.008729 kubelet[2796]: E0213 19:34:44.008693 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.008773 kubelet[2796]: E0213 19:34:44.008751 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" Feb 13 19:34:44.008861 kubelet[2796]: E0213 19:34:44.008780 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" Feb 13 19:34:44.008891 kubelet[2796]: E0213 19:34:44.008845 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" podUID="23896514-92a6-4ab0-b171-3b7efd7da770" Feb 13 19:34:44.013511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4-shm.mount: Deactivated successfully. Feb 13 19:34:44.059308 containerd[1579]: time="2025-02-13T19:34:44.059269187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:44.121206 containerd[1579]: time="2025-02-13T19:34:44.121147909Z" level=error msg="Failed to destroy network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.121534 containerd[1579]: time="2025-02-13T19:34:44.121509349Z" level=error msg="encountered an error cleaning up failed sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.121591 containerd[1579]: time="2025-02-13T19:34:44.121562980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.121874 kubelet[2796]: E0213 19:34:44.121817 2796 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:44.121991 kubelet[2796]: E0213 19:34:44.121881 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:44.121991 kubelet[2796]: E0213 19:34:44.121900 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:44.121991 kubelet[2796]: E0213 19:34:44.121943 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:44.123842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b-shm.mount: Deactivated successfully. Feb 13 19:34:44.863037 kubelet[2796]: I0213 19:34:44.863001 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583" Feb 13 19:34:44.863590 containerd[1579]: time="2025-02-13T19:34:44.863471076Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\"" Feb 13 19:34:44.863909 containerd[1579]: time="2025-02-13T19:34:44.863856300Z" level=info msg="Ensure that sandbox a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583 in task-service has been cleanup successfully" Feb 13 19:34:44.864410 kubelet[2796]: I0213 19:34:44.864390 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f" Feb 13 19:34:44.865107 containerd[1579]: time="2025-02-13T19:34:44.864914911Z" level=info msg="TearDown network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" successfully" Feb 13 19:34:44.865107 containerd[1579]: time="2025-02-13T19:34:44.864944747Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" returns successfully" Feb 13 19:34:44.865107 containerd[1579]: time="2025-02-13T19:34:44.864952963Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\"" Feb 13 19:34:44.865229 containerd[1579]: time="2025-02-13T19:34:44.865179479Z" level=info msg="Ensure that sandbox ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f in task-service has been cleanup successfully" 
Feb 13 19:34:44.865420 containerd[1579]: time="2025-02-13T19:34:44.865385246Z" level=info msg="TearDown network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" successfully" Feb 13 19:34:44.865420 containerd[1579]: time="2025-02-13T19:34:44.865403280Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" returns successfully" Feb 13 19:34:44.866075 containerd[1579]: time="2025-02-13T19:34:44.866052101Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\"" Feb 13 19:34:44.866145 containerd[1579]: time="2025-02-13T19:34:44.866128243Z" level=info msg="TearDown network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" successfully" Feb 13 19:34:44.866178 containerd[1579]: time="2025-02-13T19:34:44.866143703Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" returns successfully" Feb 13 19:34:44.866217 containerd[1579]: time="2025-02-13T19:34:44.866197905Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\"" Feb 13 19:34:44.866312 containerd[1579]: time="2025-02-13T19:34:44.866273747Z" level=info msg="TearDown network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" successfully" Feb 13 19:34:44.866312 containerd[1579]: time="2025-02-13T19:34:44.866291480Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" returns successfully" Feb 13 19:34:44.867247 containerd[1579]: time="2025-02-13T19:34:44.866970227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:2,}" Feb 13 19:34:44.867247 containerd[1579]: time="2025-02-13T19:34:44.866974416Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:34:44.867408 kubelet[2796]: I0213 19:34:44.867299 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b" Feb 13 19:34:44.867440 systemd[1]: run-netns-cni\x2d6142b995\x2d1fa6\x2d1bd9\x2d023e\x2d725d39f6e5e2.mount: Deactivated successfully. Feb 13 19:34:44.867907 containerd[1579]: time="2025-02-13T19:34:44.867882263Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\"" Feb 13 19:34:44.868119 containerd[1579]: time="2025-02-13T19:34:44.868054176Z" level=info msg="Ensure that sandbox 719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b in task-service has been cleanup successfully" Feb 13 19:34:44.868392 containerd[1579]: time="2025-02-13T19:34:44.868368167Z" level=info msg="TearDown network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" successfully" Feb 13 19:34:44.868392 containerd[1579]: time="2025-02-13T19:34:44.868389758Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" returns successfully" Feb 13 19:34:44.869032 containerd[1579]: time="2025-02-13T19:34:44.869010235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:1,}" Feb 13 19:34:44.869378 kubelet[2796]: I0213 19:34:44.869358 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2" Feb 13 19:34:44.869889 containerd[1579]: time="2025-02-13T19:34:44.869778199Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\"" Feb 13 19:34:44.870180 containerd[1579]: 
time="2025-02-13T19:34:44.870022279Z" level=info msg="Ensure that sandbox 8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2 in task-service has been cleanup successfully" Feb 13 19:34:44.870250 containerd[1579]: time="2025-02-13T19:34:44.870223196Z" level=info msg="TearDown network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" successfully" Feb 13 19:34:44.870276 containerd[1579]: time="2025-02-13T19:34:44.870248364Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" returns successfully" Feb 13 19:34:44.870712 containerd[1579]: time="2025-02-13T19:34:44.870546344Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\"" Feb 13 19:34:44.870712 containerd[1579]: time="2025-02-13T19:34:44.870643818Z" level=info msg="TearDown network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" successfully" Feb 13 19:34:44.870712 containerd[1579]: time="2025-02-13T19:34:44.870653747Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" returns successfully" Feb 13 19:34:44.870941 kubelet[2796]: E0213 19:34:44.870917 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:44.871051 kubelet[2796]: I0213 19:34:44.870920 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b" Feb 13 19:34:44.871187 containerd[1579]: time="2025-02-13T19:34:44.871163545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:2,}" Feb 13 19:34:44.871260 containerd[1579]: time="2025-02-13T19:34:44.871229890Z" level=info msg="StopPodSandbox for 
\"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\"" Feb 13 19:34:44.871392 containerd[1579]: time="2025-02-13T19:34:44.871375033Z" level=info msg="Ensure that sandbox e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b in task-service has been cleanup successfully" Feb 13 19:34:44.871640 containerd[1579]: time="2025-02-13T19:34:44.871622498Z" level=info msg="TearDown network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" successfully" Feb 13 19:34:44.871640 containerd[1579]: time="2025-02-13T19:34:44.871638008Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" returns successfully" Feb 13 19:34:44.872047 containerd[1579]: time="2025-02-13T19:34:44.872019616Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\"" Feb 13 19:34:44.872134 containerd[1579]: time="2025-02-13T19:34:44.872115295Z" level=info msg="TearDown network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" successfully" Feb 13 19:34:44.872134 containerd[1579]: time="2025-02-13T19:34:44.872131355Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" returns successfully" Feb 13 19:34:44.872412 kubelet[2796]: I0213 19:34:44.872391 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210" Feb 13 19:34:44.872412 kubelet[2796]: E0213 19:34:44.872401 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:44.872592 containerd[1579]: time="2025-02-13T19:34:44.872572095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:2,}" Feb 
13 19:34:44.872773 containerd[1579]: time="2025-02-13T19:34:44.872746623Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\"" Feb 13 19:34:44.872922 containerd[1579]: time="2025-02-13T19:34:44.872905031Z" level=info msg="Ensure that sandbox 2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210 in task-service has been cleanup successfully" Feb 13 19:34:44.873043 containerd[1579]: time="2025-02-13T19:34:44.873027151Z" level=info msg="TearDown network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" successfully" Feb 13 19:34:44.873079 containerd[1579]: time="2025-02-13T19:34:44.873041668Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" returns successfully" Feb 13 19:34:44.873432 containerd[1579]: time="2025-02-13T19:34:44.873302308Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\"" Feb 13 19:34:44.873432 containerd[1579]: time="2025-02-13T19:34:44.873377760Z" level=info msg="TearDown network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" successfully" Feb 13 19:34:44.873432 containerd[1579]: time="2025-02-13T19:34:44.873386547Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" returns successfully" Feb 13 19:34:44.873668 containerd[1579]: time="2025-02-13T19:34:44.873642017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:34:45.011861 systemd[1]: run-netns-cni\x2d8ae1e04a\x2d3a41\x2dbfc9\x2d362d\x2dbc8344b7928a.mount: Deactivated successfully. Feb 13 19:34:45.012057 systemd[1]: run-netns-cni\x2d8fbb1b50\x2d129f\x2dc1cb\x2dce8b\x2dbff8c7919eb8.mount: Deactivated successfully. 
Feb 13 19:34:45.012194 systemd[1]: run-netns-cni\x2d222c69c5\x2d3cd8\x2dcbf2\x2d11da\x2d9ec79fcffdd9.mount: Deactivated successfully. Feb 13 19:34:45.012345 systemd[1]: run-netns-cni\x2d669995a5\x2d7edb\x2d007c\x2dabda\x2d40659c193089.mount: Deactivated successfully. Feb 13 19:34:45.012485 systemd[1]: run-netns-cni\x2dfc4ba362\x2d6627\x2daac0\x2dec73\x2d08e4f303af09.mount: Deactivated successfully. Feb 13 19:34:45.307216 containerd[1579]: time="2025-02-13T19:34:45.306941922Z" level=error msg="Failed to destroy network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.308028 containerd[1579]: time="2025-02-13T19:34:45.308001745Z" level=error msg="encountered an error cleaning up failed sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.308230 containerd[1579]: time="2025-02-13T19:34:45.308186533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.309099 kubelet[2796]: E0213 19:34:45.308857 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.309099 kubelet[2796]: E0213 19:34:45.308930 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" Feb 13 19:34:45.309099 kubelet[2796]: E0213 19:34:45.308959 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" Feb 13 19:34:45.309241 kubelet[2796]: E0213 19:34:45.309013 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" podUID="66ee47e8-47e1-43f0-b3d2-64715b2b7237" Feb 13 19:34:45.326836 containerd[1579]: time="2025-02-13T19:34:45.325866872Z" level=error msg="Failed to destroy network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.326836 containerd[1579]: time="2025-02-13T19:34:45.326449027Z" level=error msg="encountered an error cleaning up failed sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.326836 containerd[1579]: time="2025-02-13T19:34:45.326520061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.327118 kubelet[2796]: E0213 19:34:45.327041 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.327118 
kubelet[2796]: E0213 19:34:45.327111 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" Feb 13 19:34:45.327222 kubelet[2796]: E0213 19:34:45.327133 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" Feb 13 19:34:45.327222 kubelet[2796]: E0213 19:34:45.327186 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" podUID="23896514-92a6-4ab0-b171-3b7efd7da770" Feb 13 19:34:45.338239 containerd[1579]: time="2025-02-13T19:34:45.338170180Z" level=error msg="Failed to destroy network for sandbox 
\"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.338992 containerd[1579]: time="2025-02-13T19:34:45.338951018Z" level=error msg="encountered an error cleaning up failed sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.339154 containerd[1579]: time="2025-02-13T19:34:45.339114936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.340732 containerd[1579]: time="2025-02-13T19:34:45.339298872Z" level=error msg="Failed to destroy network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.340732 containerd[1579]: time="2025-02-13T19:34:45.340144563Z" level=error msg="encountered an error cleaning up failed sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.340732 containerd[1579]: time="2025-02-13T19:34:45.340192623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.340925 kubelet[2796]: E0213 19:34:45.339541 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.340925 kubelet[2796]: E0213 19:34:45.339607 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx" Feb 13 19:34:45.340925 kubelet[2796]: E0213 19:34:45.339630 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx" Feb 13 19:34:45.341023 kubelet[2796]: E0213 19:34:45.339675 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-krffx" podUID="7703fc78-f952-4a15-b2f4-c2b67bf6b32a" Feb 13 19:34:45.341985 containerd[1579]: time="2025-02-13T19:34:45.341925322Z" level=error msg="Failed to destroy network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342050 kubelet[2796]: E0213 19:34:45.341992 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342097 kubelet[2796]: E0213 19:34:45.342052 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" Feb 13 19:34:45.342097 kubelet[2796]: E0213 19:34:45.342074 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" Feb 13 19:34:45.342142 containerd[1579]: time="2025-02-13T19:34:45.341930411Z" level=error msg="Failed to destroy network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342169 kubelet[2796]: E0213 19:34:45.342132 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" podUID="7ddf3646-1cea-4076-a090-fff52499412c" Feb 13 19:34:45.342600 containerd[1579]: 
time="2025-02-13T19:34:45.342559585Z" level=error msg="encountered an error cleaning up failed sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342638 containerd[1579]: time="2025-02-13T19:34:45.342615199Z" level=error msg="encountered an error cleaning up failed sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342710 containerd[1579]: time="2025-02-13T19:34:45.342657078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342763 containerd[1579]: time="2025-02-13T19:34:45.342626029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342929 kubelet[2796]: E0213 19:34:45.342893 2796 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342970 kubelet[2796]: E0213 19:34:45.342921 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:45.342992 kubelet[2796]: E0213 19:34:45.342977 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm" Feb 13 19:34:45.343018 kubelet[2796]: E0213 19:34:45.343002 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm" Feb 13 19:34:45.343097 kubelet[2796]: E0213 19:34:45.343047 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-97hdm" podUID="3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6" Feb 13 19:34:45.343097 kubelet[2796]: E0213 19:34:45.342977 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:45.343179 kubelet[2796]: E0213 19:34:45.343102 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:45.343205 kubelet[2796]: E0213 19:34:45.343154 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:45.801187 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:50928.service - OpenSSH per-connection server daemon (10.0.0.1:50928). Feb 13 19:34:45.849790 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 50928 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:34:45.851698 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:45.857120 systemd-logind[1555]: New session 10 of user core. Feb 13 19:34:45.864336 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 19:34:45.881170 kubelet[2796]: I0213 19:34:45.881104 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8" Feb 13 19:34:45.882204 containerd[1579]: time="2025-02-13T19:34:45.881945868Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\"" Feb 13 19:34:45.882204 containerd[1579]: time="2025-02-13T19:34:45.882165372Z" level=info msg="Ensure that sandbox 552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8 in task-service has been cleanup successfully" Feb 13 19:34:45.882541 containerd[1579]: time="2025-02-13T19:34:45.882346052Z" level=info msg="TearDown network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" successfully" Feb 13 19:34:45.882541 containerd[1579]: time="2025-02-13T19:34:45.882357213Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" returns successfully" Feb 13 19:34:45.882713 containerd[1579]: time="2025-02-13T19:34:45.882635165Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\"" Feb 13 19:34:45.883059 containerd[1579]: time="2025-02-13T19:34:45.883032342Z" level=info msg="TearDown network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" successfully" Feb 13 19:34:45.883059 containerd[1579]: time="2025-02-13T19:34:45.883051418Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" returns successfully" Feb 13 19:34:45.883657 containerd[1579]: time="2025-02-13T19:34:45.883266863Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\"" Feb 13 19:34:45.883657 containerd[1579]: time="2025-02-13T19:34:45.883338417Z" level=info msg="TearDown network for sandbox 
\"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" successfully" Feb 13 19:34:45.883657 containerd[1579]: time="2025-02-13T19:34:45.883347595Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" returns successfully" Feb 13 19:34:45.883752 kubelet[2796]: I0213 19:34:45.883282 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20" Feb 13 19:34:45.883991 containerd[1579]: time="2025-02-13T19:34:45.883963934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:3,}" Feb 13 19:34:45.884035 containerd[1579]: time="2025-02-13T19:34:45.884008798Z" level=info msg="StopPodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\"" Feb 13 19:34:45.884648 containerd[1579]: time="2025-02-13T19:34:45.884504441Z" level=info msg="Ensure that sandbox 0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20 in task-service has been cleanup successfully" Feb 13 19:34:45.885368 containerd[1579]: time="2025-02-13T19:34:45.884841925Z" level=info msg="TearDown network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" successfully" Feb 13 19:34:45.885368 containerd[1579]: time="2025-02-13T19:34:45.884854008Z" level=info msg="StopPodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" returns successfully" Feb 13 19:34:45.885368 containerd[1579]: time="2025-02-13T19:34:45.885137060Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\"" Feb 13 19:34:45.885368 containerd[1579]: time="2025-02-13T19:34:45.885202524Z" level=info msg="TearDown network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" successfully" Feb 13 19:34:45.885368 
containerd[1579]: time="2025-02-13T19:34:45.885211500Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" returns successfully" Feb 13 19:34:45.885490 containerd[1579]: time="2025-02-13T19:34:45.885388583Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\"" Feb 13 19:34:45.885490 containerd[1579]: time="2025-02-13T19:34:45.885456330Z" level=info msg="TearDown network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" successfully" Feb 13 19:34:45.885490 containerd[1579]: time="2025-02-13T19:34:45.885464005Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" returns successfully" Feb 13 19:34:45.885892 kubelet[2796]: E0213 19:34:45.885853 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:45.886352 containerd[1579]: time="2025-02-13T19:34:45.886130308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:3,}" Feb 13 19:34:45.886774 kubelet[2796]: I0213 19:34:45.886747 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd" Feb 13 19:34:45.887620 containerd[1579]: time="2025-02-13T19:34:45.887499163Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\"" Feb 13 19:34:45.887683 containerd[1579]: time="2025-02-13T19:34:45.887660255Z" level=info msg="Ensure that sandbox f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd in task-service has been cleanup successfully" Feb 13 19:34:45.887874 containerd[1579]: time="2025-02-13T19:34:45.887849131Z" level=info msg="TearDown network for 
sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" successfully" Feb 13 19:34:45.887874 containerd[1579]: time="2025-02-13T19:34:45.887866914Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" returns successfully" Feb 13 19:34:45.888456 containerd[1579]: time="2025-02-13T19:34:45.888437146Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\"" Feb 13 19:34:45.888534 containerd[1579]: time="2025-02-13T19:34:45.888516786Z" level=info msg="TearDown network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" successfully" Feb 13 19:34:45.888534 containerd[1579]: time="2025-02-13T19:34:45.888531022Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" returns successfully" Feb 13 19:34:45.888934 containerd[1579]: time="2025-02-13T19:34:45.888719117Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\"" Feb 13 19:34:45.888934 containerd[1579]: time="2025-02-13T19:34:45.888819295Z" level=info msg="TearDown network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" successfully" Feb 13 19:34:45.888934 containerd[1579]: time="2025-02-13T19:34:45.888829564Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" returns successfully" Feb 13 19:34:45.889015 kubelet[2796]: E0213 19:34:45.888978 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:45.889185 containerd[1579]: time="2025-02-13T19:34:45.889164254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:3,}" Feb 13 19:34:45.889315 
kubelet[2796]: I0213 19:34:45.889284 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56" Feb 13 19:34:45.889823 containerd[1579]: time="2025-02-13T19:34:45.889778859Z" level=info msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\"" Feb 13 19:34:45.890207 containerd[1579]: time="2025-02-13T19:34:45.890120852Z" level=info msg="Ensure that sandbox 5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56 in task-service has been cleanup successfully" Feb 13 19:34:45.890441 containerd[1579]: time="2025-02-13T19:34:45.890407311Z" level=info msg="TearDown network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" successfully" Feb 13 19:34:45.890441 containerd[1579]: time="2025-02-13T19:34:45.890425896Z" level=info msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" returns successfully" Feb 13 19:34:45.891112 containerd[1579]: time="2025-02-13T19:34:45.890974358Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\"" Feb 13 19:34:45.891212 containerd[1579]: time="2025-02-13T19:34:45.891183060Z" level=info msg="TearDown network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" successfully" Feb 13 19:34:45.891212 containerd[1579]: time="2025-02-13T19:34:45.891198689Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" returns successfully" Feb 13 19:34:45.891302 kubelet[2796]: I0213 19:34:45.891280 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12" Feb 13 19:34:45.891574 containerd[1579]: time="2025-02-13T19:34:45.891546904Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:2,}" Feb 13 19:34:45.891859 containerd[1579]: time="2025-02-13T19:34:45.891822082Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\"" Feb 13 19:34:45.892094 containerd[1579]: time="2025-02-13T19:34:45.892069217Z" level=info msg="Ensure that sandbox 9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12 in task-service has been cleanup successfully" Feb 13 19:34:45.892428 containerd[1579]: time="2025-02-13T19:34:45.892403876Z" level=info msg="TearDown network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" successfully" Feb 13 19:34:45.892428 containerd[1579]: time="2025-02-13T19:34:45.892426488Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" returns successfully" Feb 13 19:34:45.892928 containerd[1579]: time="2025-02-13T19:34:45.892702528Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\"" Feb 13 19:34:45.892928 containerd[1579]: time="2025-02-13T19:34:45.892820410Z" level=info msg="TearDown network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" successfully" Feb 13 19:34:45.892928 containerd[1579]: time="2025-02-13T19:34:45.892831401Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" returns successfully" Feb 13 19:34:45.894182 containerd[1579]: time="2025-02-13T19:34:45.894162383Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\"" Feb 13 19:34:45.894307 containerd[1579]: time="2025-02-13T19:34:45.894248486Z" level=info msg="TearDown network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" successfully" Feb 13 19:34:45.894307 containerd[1579]: 
time="2025-02-13T19:34:45.894258374Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" returns successfully" Feb 13 19:34:45.894712 containerd[1579]: time="2025-02-13T19:34:45.894689094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:34:45.895151 kubelet[2796]: I0213 19:34:45.894992 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236" Feb 13 19:34:45.895846 containerd[1579]: time="2025-02-13T19:34:45.895536979Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\"" Feb 13 19:34:45.895846 containerd[1579]: time="2025-02-13T19:34:45.895697931Z" level=info msg="Ensure that sandbox a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236 in task-service has been cleanup successfully" Feb 13 19:34:45.896063 containerd[1579]: time="2025-02-13T19:34:45.896043832Z" level=info msg="TearDown network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" successfully" Feb 13 19:34:45.896157 containerd[1579]: time="2025-02-13T19:34:45.896144961Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" returns successfully" Feb 13 19:34:45.896632 containerd[1579]: time="2025-02-13T19:34:45.896616589Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\"" Feb 13 19:34:45.896861 containerd[1579]: time="2025-02-13T19:34:45.896846641Z" level=info msg="TearDown network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" successfully" Feb 13 19:34:45.896921 containerd[1579]: time="2025-02-13T19:34:45.896910672Z" level=info msg="StopPodSandbox for 
\"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" returns successfully" Feb 13 19:34:45.897187 containerd[1579]: time="2025-02-13T19:34:45.897171452Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\"" Feb 13 19:34:45.897373 containerd[1579]: time="2025-02-13T19:34:45.897360057Z" level=info msg="TearDown network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" successfully" Feb 13 19:34:45.897426 containerd[1579]: time="2025-02-13T19:34:45.897415791Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" returns successfully" Feb 13 19:34:45.897831 containerd[1579]: time="2025-02-13T19:34:45.897813410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:34:46.012093 systemd[1]: run-netns-cni\x2d879bb479\x2d0c7c\x2d2303\x2d8af3\x2d2cf5161af937.mount: Deactivated successfully. Feb 13 19:34:46.012280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56-shm.mount: Deactivated successfully. Feb 13 19:34:46.012436 systemd[1]: run-netns-cni\x2d4115df84\x2d7afe\x2dadb9\x2d9d44\x2dc67c059bf2ec.mount: Deactivated successfully. Feb 13 19:34:46.012584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8-shm.mount: Deactivated successfully. Feb 13 19:34:46.439828 sshd[4295]: Connection closed by 10.0.0.1 port 50928 Feb 13 19:34:46.439172 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:46.445036 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:50928.service: Deactivated successfully. Feb 13 19:34:46.450777 systemd[1]: session-10.scope: Deactivated successfully. 
Feb 13 19:34:46.452473 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:34:46.454027 systemd-logind[1555]: Removed session 10. Feb 13 19:34:46.535036 containerd[1579]: time="2025-02-13T19:34:46.534847644Z" level=error msg="Failed to destroy network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.535440 containerd[1579]: time="2025-02-13T19:34:46.535419089Z" level=error msg="encountered an error cleaning up failed sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.535549 containerd[1579]: time="2025-02-13T19:34:46.535532231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.536330 kubelet[2796]: E0213 19:34:46.535980 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 
13 19:34:46.536330 kubelet[2796]: E0213 19:34:46.536036 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx" Feb 13 19:34:46.536330 kubelet[2796]: E0213 19:34:46.536056 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx" Feb 13 19:34:46.536445 kubelet[2796]: E0213 19:34:46.536094 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-krffx" podUID="7703fc78-f952-4a15-b2f4-c2b67bf6b32a" Feb 13 19:34:46.549295 containerd[1579]: time="2025-02-13T19:34:46.549137593Z" level=error msg="Failed to destroy network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.549781 containerd[1579]: time="2025-02-13T19:34:46.549761556Z" level=error msg="encountered an error cleaning up failed sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.549913 containerd[1579]: time="2025-02-13T19:34:46.549894386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.550255 kubelet[2796]: E0213 19:34:46.550219 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.551181 kubelet[2796]: E0213 19:34:46.550431 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:46.551181 kubelet[2796]: E0213 19:34:46.550455 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8" Feb 13 19:34:46.551181 kubelet[2796]: E0213 19:34:46.550506 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c" Feb 13 19:34:46.564904 containerd[1579]: time="2025-02-13T19:34:46.564823968Z" level=error msg="Failed to destroy network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.565326 containerd[1579]: time="2025-02-13T19:34:46.565291276Z" level=error msg="encountered an error cleaning up failed sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.565424 containerd[1579]: time="2025-02-13T19:34:46.565356118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.566002 kubelet[2796]: E0213 19:34:46.565573 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.566002 kubelet[2796]: E0213 19:34:46.565638 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" Feb 13 19:34:46.566002 kubelet[2796]: E0213 19:34:46.565663 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" Feb 13 19:34:46.566181 kubelet[2796]: E0213 19:34:46.565709 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" podUID="7ddf3646-1cea-4076-a090-fff52499412c" Feb 13 19:34:46.568180 containerd[1579]: time="2025-02-13T19:34:46.568119314Z" level=error msg="Failed to destroy network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.569338 containerd[1579]: time="2025-02-13T19:34:46.569181090Z" level=error msg="encountered an error cleaning up failed sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.569338 
containerd[1579]: time="2025-02-13T19:34:46.569238037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.569503 kubelet[2796]: E0213 19:34:46.569432 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.569503 kubelet[2796]: E0213 19:34:46.569478 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" Feb 13 19:34:46.569503 kubelet[2796]: E0213 19:34:46.569496 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" Feb 
13 19:34:46.569675 kubelet[2796]: E0213 19:34:46.569534 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" podUID="23896514-92a6-4ab0-b171-3b7efd7da770" Feb 13 19:34:46.579156 containerd[1579]: time="2025-02-13T19:34:46.579117603Z" level=error msg="Failed to destroy network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.580104 containerd[1579]: time="2025-02-13T19:34:46.580076767Z" level=error msg="encountered an error cleaning up failed sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.580199 containerd[1579]: time="2025-02-13T19:34:46.580136459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.580358 kubelet[2796]: E0213 19:34:46.580308 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.580503 kubelet[2796]: E0213 19:34:46.580460 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm" Feb 13 19:34:46.580503 kubelet[2796]: E0213 19:34:46.580483 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm" Feb 13 19:34:46.580618 kubelet[2796]: E0213 19:34:46.580556 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-97hdm" podUID="3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6" Feb 13 19:34:46.583518 containerd[1579]: time="2025-02-13T19:34:46.583488742Z" level=error msg="Failed to destroy network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.583941 containerd[1579]: time="2025-02-13T19:34:46.583897681Z" level=error msg="encountered an error cleaning up failed sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.584006 containerd[1579]: time="2025-02-13T19:34:46.583938978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.584086 kubelet[2796]: E0213 19:34:46.584050 2796 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:46.584086 kubelet[2796]: E0213 19:34:46.584079 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" Feb 13 19:34:46.584188 kubelet[2796]: E0213 19:34:46.584094 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" Feb 13 19:34:46.584188 kubelet[2796]: E0213 19:34:46.584128 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" podUID="66ee47e8-47e1-43f0-b3d2-64715b2b7237" Feb 13 19:34:46.899002 kubelet[2796]: I0213 19:34:46.898852 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7" Feb 13 19:34:46.900203 containerd[1579]: time="2025-02-13T19:34:46.900133586Z" level=info msg="StopPodSandbox for \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\"" Feb 13 19:34:46.900702 containerd[1579]: time="2025-02-13T19:34:46.900342178Z" level=info msg="Ensure that sandbox 903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7 in task-service has been cleanup successfully" Feb 13 19:34:46.900702 containerd[1579]: time="2025-02-13T19:34:46.900548616Z" level=info msg="TearDown network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" successfully" Feb 13 19:34:46.900702 containerd[1579]: time="2025-02-13T19:34:46.900559877Z" level=info msg="StopPodSandbox for \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" returns successfully" Feb 13 19:34:46.900795 kubelet[2796]: I0213 19:34:46.900669 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09" Feb 13 19:34:46.900950 containerd[1579]: time="2025-02-13T19:34:46.900844954Z" level=info msg="StopPodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\"" Feb 13 19:34:46.901200 containerd[1579]: time="2025-02-13T19:34:46.901177568Z" level=info msg="TearDown network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" successfully" Feb 13 19:34:46.901200 containerd[1579]: time="2025-02-13T19:34:46.901196154Z" level=info msg="StopPodSandbox 
for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" returns successfully" Feb 13 19:34:46.901649 containerd[1579]: time="2025-02-13T19:34:46.901597208Z" level=info msg="StopPodSandbox for \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\"" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.901771185Z" level=info msg="Ensure that sandbox d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09 in task-service has been cleanup successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.901961223Z" level=info msg="TearDown network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.901973355Z" level=info msg="StopPodSandbox for \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" returns successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902111014Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\"" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902180795Z" level=info msg="TearDown network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902222413Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" returns successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902316861Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\"" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902376463Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\"" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902389137Z" level=info msg="TearDown network for 
sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902398064Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" returns successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902446154Z" level=info msg="TearDown network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902455772Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" returns successfully" Feb 13 19:34:46.903192 containerd[1579]: time="2025-02-13T19:34:46.902965040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:4,}" Feb 13 19:34:46.903573 kubelet[2796]: E0213 19:34:46.902603 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:46.903573 kubelet[2796]: I0213 19:34:46.903288 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6" Feb 13 19:34:46.903628 containerd[1579]: time="2025-02-13T19:34:46.903255846Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\"" Feb 13 19:34:46.903628 containerd[1579]: time="2025-02-13T19:34:46.903351997Z" level=info msg="TearDown network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" successfully" Feb 13 19:34:46.903628 containerd[1579]: time="2025-02-13T19:34:46.903361485Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" returns successfully" Feb 
13 19:34:46.903628 containerd[1579]: time="2025-02-13T19:34:46.903622886Z" level=info msg="StopPodSandbox for \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\"" Feb 13 19:34:46.903823 containerd[1579]: time="2025-02-13T19:34:46.903780132Z" level=info msg="Ensure that sandbox 11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6 in task-service has been cleanup successfully" Feb 13 19:34:46.903979 containerd[1579]: time="2025-02-13T19:34:46.903960722Z" level=info msg="TearDown network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" successfully" Feb 13 19:34:46.904006 containerd[1579]: time="2025-02-13T19:34:46.903976712Z" level=info msg="StopPodSandbox for \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" returns successfully" Feb 13 19:34:46.904031 containerd[1579]: time="2025-02-13T19:34:46.903975760Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\"" Feb 13 19:34:46.904094 containerd[1579]: time="2025-02-13T19:34:46.904075087Z" level=info msg="TearDown network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" successfully" Feb 13 19:34:46.904094 containerd[1579]: time="2025-02-13T19:34:46.904090757Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" returns successfully" Feb 13 19:34:46.904226 kubelet[2796]: E0213 19:34:46.904207 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:46.904457 containerd[1579]: time="2025-02-13T19:34:46.904434443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:4,}" Feb 13 19:34:46.904577 containerd[1579]: time="2025-02-13T19:34:46.904552945Z" level=info 
msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\"" Feb 13 19:34:46.905385 kubelet[2796]: I0213 19:34:46.905364 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec" Feb 13 19:34:46.905787 containerd[1579]: time="2025-02-13T19:34:46.905762650Z" level=info msg="StopPodSandbox for \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\"" Feb 13 19:34:46.906218 containerd[1579]: time="2025-02-13T19:34:46.905930615Z" level=info msg="Ensure that sandbox b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec in task-service has been cleanup successfully" Feb 13 19:34:46.906218 containerd[1579]: time="2025-02-13T19:34:46.906093582Z" level=info msg="TearDown network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" successfully" Feb 13 19:34:46.906218 containerd[1579]: time="2025-02-13T19:34:46.906117777Z" level=info msg="StopPodSandbox for \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" returns successfully" Feb 13 19:34:46.906380 containerd[1579]: time="2025-02-13T19:34:46.906355104Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\"" Feb 13 19:34:46.906455 containerd[1579]: time="2025-02-13T19:34:46.906435695Z" level=info msg="TearDown network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" successfully" Feb 13 19:34:46.906455 containerd[1579]: time="2025-02-13T19:34:46.906450122Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" returns successfully" Feb 13 19:34:46.908112 containerd[1579]: time="2025-02-13T19:34:46.908083965Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\"" Feb 13 19:34:46.908182 containerd[1579]: time="2025-02-13T19:34:46.908163714Z" 
level=info msg="TearDown network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" successfully"
Feb 13 19:34:46.908212 containerd[1579]: time="2025-02-13T19:34:46.908180115Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" returns successfully"
Feb 13 19:34:46.909560 containerd[1579]: time="2025-02-13T19:34:46.909533300Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\""
Feb 13 19:34:46.909636 containerd[1579]: time="2025-02-13T19:34:46.909616637Z" level=info msg="TearDown network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" successfully"
Feb 13 19:34:46.909636 containerd[1579]: time="2025-02-13T19:34:46.909633017Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" returns successfully"
Feb 13 19:34:46.912302 containerd[1579]: time="2025-02-13T19:34:46.912277199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:4,}"
Feb 13 19:34:46.914749 kubelet[2796]: I0213 19:34:46.914689 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e"
Feb 13 19:34:46.915751 containerd[1579]: time="2025-02-13T19:34:46.915724340Z" level=info msg="StopPodSandbox for \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\""
Feb 13 19:34:46.916032 containerd[1579]: time="2025-02-13T19:34:46.915938273Z" level=info msg="Ensure that sandbox 45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e in task-service has been cleanup successfully"
Feb 13 19:34:46.916256 containerd[1579]: time="2025-02-13T19:34:46.916205786Z" level=info msg="TearDown network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" successfully"
Feb 13 19:34:46.916256 containerd[1579]: time="2025-02-13T19:34:46.916252474Z" level=info msg="StopPodSandbox for \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" returns successfully"
Feb 13 19:34:46.917212 containerd[1579]: time="2025-02-13T19:34:46.916651985Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\""
Feb 13 19:34:46.917212 containerd[1579]: time="2025-02-13T19:34:46.916763104Z" level=info msg="TearDown network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" successfully"
Feb 13 19:34:46.917212 containerd[1579]: time="2025-02-13T19:34:46.916774666Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" returns successfully"
Feb 13 19:34:46.917631 containerd[1579]: time="2025-02-13T19:34:46.917605337Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\""
Feb 13 19:34:46.917930 containerd[1579]: time="2025-02-13T19:34:46.917914309Z" level=info msg="TearDown network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" successfully"
Feb 13 19:34:46.917983 containerd[1579]: time="2025-02-13T19:34:46.917972017Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" returns successfully"
Feb 13 19:34:46.918392 kubelet[2796]: I0213 19:34:46.918370 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d"
Feb 13 19:34:46.918883 containerd[1579]: time="2025-02-13T19:34:46.918449815Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\""
Feb 13 19:34:46.918883 containerd[1579]: time="2025-02-13T19:34:46.918521159Z" level=info msg="TearDown network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" successfully"
Feb 13 19:34:46.918883 containerd[1579]: time="2025-02-13T19:34:46.918530026Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" returns successfully"
Feb 13 19:34:46.919097 containerd[1579]: time="2025-02-13T19:34:46.919073378Z" level=info msg="StopPodSandbox for \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\""
Feb 13 19:34:46.919236 containerd[1579]: time="2025-02-13T19:34:46.919211457Z" level=info msg="Ensure that sandbox 04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d in task-service has been cleanup successfully"
Feb 13 19:34:46.919347 containerd[1579]: time="2025-02-13T19:34:46.919213200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:4,}"
Feb 13 19:34:46.919440 containerd[1579]: time="2025-02-13T19:34:46.919417876Z" level=info msg="TearDown network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" successfully"
Feb 13 19:34:46.919471 containerd[1579]: time="2025-02-13T19:34:46.919437322Z" level=info msg="StopPodSandbox for \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" returns successfully"
Feb 13 19:34:46.919661 containerd[1579]: time="2025-02-13T19:34:46.919638280Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\""
Feb 13 19:34:46.919760 containerd[1579]: time="2025-02-13T19:34:46.919730503Z" level=info msg="TearDown network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" successfully"
Feb 13 19:34:46.919760 containerd[1579]: time="2025-02-13T19:34:46.919746192Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" returns successfully"
Feb 13 19:34:46.920365 containerd[1579]: time="2025-02-13T19:34:46.920152256Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\""
Feb 13 19:34:46.920365 containerd[1579]: time="2025-02-13T19:34:46.920234050Z" level=info msg="TearDown network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" successfully"
Feb 13 19:34:46.920365 containerd[1579]: time="2025-02-13T19:34:46.920245461Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" returns successfully"
Feb 13 19:34:46.920474 containerd[1579]: time="2025-02-13T19:34:46.920455987Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\""
Feb 13 19:34:46.920603 containerd[1579]: time="2025-02-13T19:34:46.920582906Z" level=info msg="TearDown network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" successfully"
Feb 13 19:34:46.920603 containerd[1579]: time="2025-02-13T19:34:46.920600509Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" returns successfully"
Feb 13 19:34:46.920921 containerd[1579]: time="2025-02-13T19:34:46.920896575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:4,}"
Feb 13 19:34:47.013286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6-shm.mount: Deactivated successfully.
Feb 13 19:34:47.014036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d-shm.mount: Deactivated successfully.
Feb 13 19:34:47.014214 systemd[1]: run-netns-cni\x2d1f62bb12\x2d987e\x2d236c\x2de41d\x2d586e4636a3c0.mount: Deactivated successfully.
Feb 13 19:34:47.014368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e-shm.mount: Deactivated successfully.
Feb 13 19:34:47.014506 systemd[1]: run-netns-cni\x2d340c5589\x2dc7ed\x2d8423\x2d85c9\x2d0024e9d5d792.mount: Deactivated successfully.
Feb 13 19:34:47.014638 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09-shm.mount: Deactivated successfully.
Feb 13 19:34:47.036655 containerd[1579]: time="2025-02-13T19:34:47.036588384Z" level=info msg="TearDown network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" successfully"
Feb 13 19:34:47.036655 containerd[1579]: time="2025-02-13T19:34:47.036653276Z" level=info msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" returns successfully"
Feb 13 19:34:47.037063 containerd[1579]: time="2025-02-13T19:34:47.037016729Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\""
Feb 13 19:34:47.037122 containerd[1579]: time="2025-02-13T19:34:47.037102661Z" level=info msg="TearDown network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" successfully"
Feb 13 19:34:47.037122 containerd[1579]: time="2025-02-13T19:34:47.037114474Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" returns successfully"
Feb 13 19:34:47.038219 containerd[1579]: time="2025-02-13T19:34:47.038193472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:3,}"
Feb 13 19:34:47.768043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935894548.mount: Deactivated successfully.
Feb 13 19:34:48.040656 containerd[1579]: time="2025-02-13T19:34:48.040522788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:48.053470 containerd[1579]: time="2025-02-13T19:34:48.053421953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 19:34:48.062657 containerd[1579]: time="2025-02-13T19:34:48.062615443Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:48.074830 containerd[1579]: time="2025-02-13T19:34:48.074699647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:48.098068 containerd[1579]: time="2025-02-13T19:34:48.098014560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.263438354s"
Feb 13 19:34:48.098068 containerd[1579]: time="2025-02-13T19:34:48.098066037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 19:34:48.106239 containerd[1579]: time="2025-02-13T19:34:48.106200827Z" level=info msg="CreateContainer within sandbox \"19894ea7d6a4f230d7282755f2872cc89e08b68c343ee81557ac447474f43f79\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 19:34:48.145465 containerd[1579]: time="2025-02-13T19:34:48.145411889Z" level=info msg="CreateContainer within sandbox \"19894ea7d6a4f230d7282755f2872cc89e08b68c343ee81557ac447474f43f79\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1bad8b0d610a4f3a3ba3bd79ae176bdea4e7e79bbff1533d349bcecd6e140edd\""
Feb 13 19:34:48.146383 containerd[1579]: time="2025-02-13T19:34:48.146355653Z" level=info msg="StartContainer for \"1bad8b0d610a4f3a3ba3bd79ae176bdea4e7e79bbff1533d349bcecd6e140edd\""
Feb 13 19:34:48.157721 containerd[1579]: time="2025-02-13T19:34:48.157344859Z" level=error msg="Failed to destroy network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.158245 containerd[1579]: time="2025-02-13T19:34:48.158222358Z" level=error msg="encountered an error cleaning up failed sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.158519 containerd[1579]: time="2025-02-13T19:34:48.158496624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.159681 kubelet[2796]: E0213 19:34:48.159626 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.160174 kubelet[2796]: E0213 19:34:48.159702 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:48.160174 kubelet[2796]: E0213 19:34:48.159726 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-97hdm"
Feb 13 19:34:48.160174 kubelet[2796]: E0213 19:34:48.159772 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-97hdm_kube-system(3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-97hdm" podUID="3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6"
Feb 13 19:34:48.180363 containerd[1579]: time="2025-02-13T19:34:48.180307360Z" level=error msg="Failed to destroy network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.180794 containerd[1579]: time="2025-02-13T19:34:48.180680030Z" level=error msg="encountered an error cleaning up failed sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.180794 containerd[1579]: time="2025-02-13T19:34:48.180746735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.181019 kubelet[2796]: E0213 19:34:48.180980 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.181070 kubelet[2796]: E0213 19:34:48.181041 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt"
Feb 13 19:34:48.181070 kubelet[2796]: E0213 19:34:48.181065 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt"
Feb 13 19:34:48.181135 kubelet[2796]: E0213 19:34:48.181102 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-j5sxt_calico-apiserver(23896514-92a6-4ab0-b171-3b7efd7da770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" podUID="23896514-92a6-4ab0-b171-3b7efd7da770"
Feb 13 19:34:48.181474 containerd[1579]: time="2025-02-13T19:34:48.181439758Z" level=error msg="Failed to destroy network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.181956 containerd[1579]: time="2025-02-13T19:34:48.181912055Z" level=error msg="encountered an error cleaning up failed sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.182025 containerd[1579]: time="2025-02-13T19:34:48.181951449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.182106 kubelet[2796]: E0213 19:34:48.182057 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.182106 kubelet[2796]: E0213 19:34:48.182085 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:48.182106 kubelet[2796]: E0213 19:34:48.182102 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t"
Feb 13 19:34:48.182189 kubelet[2796]: E0213 19:34:48.182126 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87cc8ff8d-k264t_calico-apiserver(7ddf3646-1cea-4076-a090-fff52499412c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" podUID="7ddf3646-1cea-4076-a090-fff52499412c"
Feb 13 19:34:48.188476 containerd[1579]: time="2025-02-13T19:34:48.188344526Z" level=error msg="Failed to destroy network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.188811 containerd[1579]: time="2025-02-13T19:34:48.188772501Z" level=error msg="encountered an error cleaning up failed sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.188862 containerd[1579]: time="2025-02-13T19:34:48.188836090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.189071 kubelet[2796]: E0213 19:34:48.189023 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.189300 kubelet[2796]: E0213 19:34:48.189166 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8"
Feb 13 19:34:48.189300 kubelet[2796]: E0213 19:34:48.189197 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwgr8"
Feb 13 19:34:48.189300 kubelet[2796]: E0213 19:34:48.189251 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dwgr8_calico-system(58082a0b-a7e3-4696-a1fa-c41d6d0bc84c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dwgr8" podUID="58082a0b-a7e3-4696-a1fa-c41d6d0bc84c"
Feb 13 19:34:48.190671 containerd[1579]: time="2025-02-13T19:34:48.190640041Z" level=error msg="Failed to destroy network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.190992 containerd[1579]: time="2025-02-13T19:34:48.190969560Z" level=error msg="encountered an error cleaning up failed sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.191033 containerd[1579]: time="2025-02-13T19:34:48.191008884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.191219 kubelet[2796]: E0213 19:34:48.191133 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.191219 kubelet[2796]: E0213 19:34:48.191162 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:48.191219 kubelet[2796]: E0213 19:34:48.191177 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2"
Feb 13 19:34:48.191376 kubelet[2796]: E0213 19:34:48.191213 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d854b45db-vjxr2_calico-system(66ee47e8-47e1-43f0-b3d2-64715b2b7237)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" podUID="66ee47e8-47e1-43f0-b3d2-64715b2b7237"
Feb 13 19:34:48.194251 containerd[1579]: time="2025-02-13T19:34:48.194197618Z" level=error msg="Failed to destroy network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.194612 containerd[1579]: time="2025-02-13T19:34:48.194583092Z" level=error msg="encountered an error cleaning up failed sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.194659 containerd[1579]: time="2025-02-13T19:34:48.194639769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.194842 kubelet[2796]: E0213 19:34:48.194814 2796 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:48.194884 kubelet[2796]: E0213 19:34:48.194843 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx"
Feb 13 19:34:48.194884 kubelet[2796]: E0213 19:34:48.194859 2796 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krffx"
Feb 13 19:34:48.194934 kubelet[2796]: E0213 19:34:48.194894 2796 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-krffx_kube-system(7703fc78-f952-4a15-b2f4-c2b67bf6b32a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-krffx" podUID="7703fc78-f952-4a15-b2f4-c2b67bf6b32a"
Feb 13 19:34:48.326816 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 19:34:48.326935 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 19:34:48.354043 containerd[1579]: time="2025-02-13T19:34:48.353998673Z" level=info msg="StartContainer for \"1bad8b0d610a4f3a3ba3bd79ae176bdea4e7e79bbff1533d349bcecd6e140edd\" returns successfully"
Feb 13 19:34:48.923722 kubelet[2796]: I0213 19:34:48.923691 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837"
Feb 13 19:34:48.924314 containerd[1579]: time="2025-02-13T19:34:48.924263455Z" level=info msg="StopPodSandbox for \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\""
Feb 13 19:34:48.924546 containerd[1579]: time="2025-02-13T19:34:48.924509648Z" level=info msg="Ensure that sandbox 0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837 in task-service has been cleanup successfully"
Feb 13 19:34:48.924878 containerd[1579]: time="2025-02-13T19:34:48.924749709Z" level=info msg="TearDown network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\" successfully"
Feb 13 19:34:48.924878 containerd[1579]: time="2025-02-13T19:34:48.924831503Z" level=info msg="StopPodSandbox for \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\" returns successfully"
Feb 13 19:34:48.925186 containerd[1579]: time="2025-02-13T19:34:48.925056876Z" level=info msg="StopPodSandbox for \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\""
Feb 13 19:34:48.925186 containerd[1579]: time="2025-02-13T19:34:48.925156684Z" level=info msg="TearDown network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" successfully"
Feb 13 19:34:48.925186 containerd[1579]: time="2025-02-13T19:34:48.925178916Z" level=info msg="StopPodSandbox for \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" returns successfully"
Feb 13 19:34:48.925769 kubelet[2796]: I0213 19:34:48.925459 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c"
Feb 13 19:34:48.925842 containerd[1579]: time="2025-02-13T19:34:48.925457660Z" level=info msg="StopPodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\""
Feb 13 19:34:48.925842 containerd[1579]: time="2025-02-13T19:34:48.925580931Z" level=info msg="TearDown network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" successfully"
Feb 13 19:34:48.925842 containerd[1579]: time="2025-02-13T19:34:48.925595809Z" level=info msg="StopPodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" returns successfully"
Feb 13 19:34:48.925842 containerd[1579]: time="2025-02-13T19:34:48.925793280Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\""
Feb 13 19:34:48.925972 containerd[1579]: time="2025-02-13T19:34:48.925916432Z" level=info msg="TearDown network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" successfully"
Feb 13 19:34:48.925972 containerd[1579]: time="2025-02-13T19:34:48.925929707Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" returns successfully"
Feb 13 19:34:48.926036 containerd[1579]: time="2025-02-13T19:34:48.926025647Z" level=info msg="StopPodSandbox for \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\""
Feb 13 19:34:48.926229 containerd[1579]: time="2025-02-13T19:34:48.926210294Z" level=info msg="Ensure that sandbox 4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c in task-service has been cleanup successfully"
Feb 13 19:34:48.926830 containerd[1579]: time="2025-02-13T19:34:48.926514637Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\""
Feb 13 19:34:48.926830 containerd[1579]: time="2025-02-13T19:34:48.926610055Z" level=info msg="TearDown network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" successfully"
Feb 13 19:34:48.926830 containerd[1579]: time="2025-02-13T19:34:48.926624683Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" returns successfully"
Feb 13 19:34:48.926967 kubelet[2796]: E0213 19:34:48.926776 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:48.927158 containerd[1579]: time="2025-02-13T19:34:48.927038892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:5,}"
Feb 13 19:34:48.927158 containerd[1579]: time="2025-02-13T19:34:48.927148258Z" level=info msg="TearDown network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\" successfully"
Feb 13 19:34:48.927260 containerd[1579]: time="2025-02-13T19:34:48.927165450Z" level=info msg="StopPodSandbox for \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\" returns successfully"
Feb 13 19:34:48.927637 containerd[1579]: time="2025-02-13T19:34:48.927610096Z" level=info msg="StopPodSandbox
for \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\"" Feb 13 19:34:48.927742 containerd[1579]: time="2025-02-13T19:34:48.927713600Z" level=info msg="TearDown network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" successfully" Feb 13 19:34:48.927742 containerd[1579]: time="2025-02-13T19:34:48.927731553Z" level=info msg="StopPodSandbox for \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" returns successfully" Feb 13 19:34:48.927865 kubelet[2796]: I0213 19:34:48.927782 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8" Feb 13 19:34:48.928306 containerd[1579]: time="2025-02-13T19:34:48.928282369Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\"" Feb 13 19:34:48.928459 containerd[1579]: time="2025-02-13T19:34:48.928394359Z" level=info msg="TearDown network for sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" successfully" Feb 13 19:34:48.928459 containerd[1579]: time="2025-02-13T19:34:48.928437551Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" returns successfully" Feb 13 19:34:48.929404 containerd[1579]: time="2025-02-13T19:34:48.928402315Z" level=info msg="StopPodSandbox for \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\"" Feb 13 19:34:48.929404 containerd[1579]: time="2025-02-13T19:34:48.928684756Z" level=info msg="Ensure that sandbox e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8 in task-service has been cleanup successfully" Feb 13 19:34:48.929404 containerd[1579]: time="2025-02-13T19:34:48.928851689Z" level=info msg="TearDown network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\" successfully" Feb 13 19:34:48.929404 containerd[1579]: time="2025-02-13T19:34:48.928866848Z" 
level=info msg="StopPodSandbox for \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\" returns successfully" Feb 13 19:34:48.929404 containerd[1579]: time="2025-02-13T19:34:48.929208439Z" level=info msg="StopPodSandbox for \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\"" Feb 13 19:34:48.929404 containerd[1579]: time="2025-02-13T19:34:48.929264836Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\"" Feb 13 19:34:48.930008 containerd[1579]: time="2025-02-13T19:34:48.929299351Z" level=info msg="TearDown network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" successfully" Feb 13 19:34:48.930008 containerd[1579]: time="2025-02-13T19:34:48.929737023Z" level=info msg="StopPodSandbox for \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" returns successfully" Feb 13 19:34:48.930008 containerd[1579]: time="2025-02-13T19:34:48.929347702Z" level=info msg="TearDown network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" successfully" Feb 13 19:34:48.930008 containerd[1579]: time="2025-02-13T19:34:48.929783240Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" returns successfully" Feb 13 19:34:48.930268 containerd[1579]: time="2025-02-13T19:34:48.930246461Z" level=info msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\"" Feb 13 19:34:48.930371 containerd[1579]: time="2025-02-13T19:34:48.930327292Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\"" Feb 13 19:34:48.930458 containerd[1579]: time="2025-02-13T19:34:48.930438562Z" level=info msg="TearDown network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" successfully" Feb 13 19:34:48.930495 containerd[1579]: time="2025-02-13T19:34:48.930455413Z" level=info 
msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" returns successfully" Feb 13 19:34:48.930495 containerd[1579]: time="2025-02-13T19:34:48.930332903Z" level=info msg="TearDown network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" successfully" Feb 13 19:34:48.930549 containerd[1579]: time="2025-02-13T19:34:48.930493545Z" level=info msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" returns successfully" Feb 13 19:34:48.930750 kubelet[2796]: E0213 19:34:48.930727 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:48.931027 containerd[1579]: time="2025-02-13T19:34:48.931002182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:5,}" Feb 13 19:34:48.931221 containerd[1579]: time="2025-02-13T19:34:48.931010527Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\"" Feb 13 19:34:48.931345 containerd[1579]: time="2025-02-13T19:34:48.931315530Z" level=info msg="TearDown network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" successfully" Feb 13 19:34:48.931345 containerd[1579]: time="2025-02-13T19:34:48.931335918Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" returns successfully" Feb 13 19:34:48.931694 containerd[1579]: time="2025-02-13T19:34:48.931640561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:4,}" Feb 13 19:34:48.932727 kubelet[2796]: E0213 19:34:48.932637 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:48.933757 kubelet[2796]: I0213 19:34:48.933736 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41" Feb 13 19:34:48.934352 containerd[1579]: time="2025-02-13T19:34:48.934185895Z" level=info msg="StopPodSandbox for \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\"" Feb 13 19:34:48.934400 containerd[1579]: time="2025-02-13T19:34:48.934356666Z" level=info msg="Ensure that sandbox 38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41 in task-service has been cleanup successfully" Feb 13 19:34:48.934718 containerd[1579]: time="2025-02-13T19:34:48.934683922Z" level=info msg="TearDown network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\" successfully" Feb 13 19:34:48.934718 containerd[1579]: time="2025-02-13T19:34:48.934709891Z" level=info msg="StopPodSandbox for \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\" returns successfully" Feb 13 19:34:48.935023 containerd[1579]: time="2025-02-13T19:34:48.934959500Z" level=info msg="StopPodSandbox for \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\"" Feb 13 19:34:48.935075 containerd[1579]: time="2025-02-13T19:34:48.935047455Z" level=info msg="TearDown network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" successfully" Feb 13 19:34:48.935075 containerd[1579]: time="2025-02-13T19:34:48.935059467Z" level=info msg="StopPodSandbox for \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" returns successfully" Feb 13 19:34:48.936034 containerd[1579]: time="2025-02-13T19:34:48.935873998Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\"" Feb 13 19:34:48.936034 containerd[1579]: time="2025-02-13T19:34:48.935973696Z" level=info msg="TearDown 
network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" successfully" Feb 13 19:34:48.936034 containerd[1579]: time="2025-02-13T19:34:48.935990137Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" returns successfully" Feb 13 19:34:48.937028 containerd[1579]: time="2025-02-13T19:34:48.937008460Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\"" Feb 13 19:34:48.937243 containerd[1579]: time="2025-02-13T19:34:48.937161047Z" level=info msg="TearDown network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" successfully" Feb 13 19:34:48.937243 containerd[1579]: time="2025-02-13T19:34:48.937223485Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" returns successfully" Feb 13 19:34:48.937678 containerd[1579]: time="2025-02-13T19:34:48.937535201Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\"" Feb 13 19:34:48.937678 containerd[1579]: time="2025-02-13T19:34:48.937618688Z" level=info msg="TearDown network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" successfully" Feb 13 19:34:48.937678 containerd[1579]: time="2025-02-13T19:34:48.937629378Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" returns successfully" Feb 13 19:34:48.938352 containerd[1579]: time="2025-02-13T19:34:48.938100723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:34:48.939368 kubelet[2796]: I0213 19:34:48.939347 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117" Feb 13 19:34:48.940158 
containerd[1579]: time="2025-02-13T19:34:48.939912800Z" level=info msg="StopPodSandbox for \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\"" Feb 13 19:34:48.940158 containerd[1579]: time="2025-02-13T19:34:48.940127554Z" level=info msg="Ensure that sandbox c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117 in task-service has been cleanup successfully" Feb 13 19:34:48.940395 containerd[1579]: time="2025-02-13T19:34:48.940324885Z" level=info msg="TearDown network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\" successfully" Feb 13 19:34:48.940395 containerd[1579]: time="2025-02-13T19:34:48.940344321Z" level=info msg="StopPodSandbox for \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\" returns successfully" Feb 13 19:34:48.940736 containerd[1579]: time="2025-02-13T19:34:48.940694820Z" level=info msg="StopPodSandbox for \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\"" Feb 13 19:34:48.941113 containerd[1579]: time="2025-02-13T19:34:48.940787494Z" level=info msg="TearDown network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" successfully" Feb 13 19:34:48.941113 containerd[1579]: time="2025-02-13T19:34:48.940825726Z" level=info msg="StopPodSandbox for \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" returns successfully" Feb 13 19:34:48.941427 containerd[1579]: time="2025-02-13T19:34:48.941347818Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\"" Feb 13 19:34:48.941513 containerd[1579]: time="2025-02-13T19:34:48.941451262Z" level=info msg="TearDown network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" successfully" Feb 13 19:34:48.941513 containerd[1579]: time="2025-02-13T19:34:48.941466771Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" returns successfully" Feb 13 
19:34:48.942075 containerd[1579]: time="2025-02-13T19:34:48.941835955Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\"" Feb 13 19:34:48.942075 containerd[1579]: time="2025-02-13T19:34:48.941930151Z" level=info msg="TearDown network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" successfully" Feb 13 19:34:48.942075 containerd[1579]: time="2025-02-13T19:34:48.941942635Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" returns successfully" Feb 13 19:34:48.942423 containerd[1579]: time="2025-02-13T19:34:48.942360250Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\"" Feb 13 19:34:48.943692 containerd[1579]: time="2025-02-13T19:34:48.942756104Z" level=info msg="TearDown network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" successfully" Feb 13 19:34:48.943692 containerd[1579]: time="2025-02-13T19:34:48.942785599Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" returns successfully" Feb 13 19:34:48.943787 kubelet[2796]: I0213 19:34:48.943707 2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e" Feb 13 19:34:48.944004 containerd[1579]: time="2025-02-13T19:34:48.943957813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:34:48.944568 containerd[1579]: time="2025-02-13T19:34:48.944203595Z" level=info msg="StopPodSandbox for \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\"" Feb 13 19:34:48.944568 containerd[1579]: time="2025-02-13T19:34:48.944425522Z" level=info msg="Ensure that sandbox 
da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e in task-service has been cleanup successfully" Feb 13 19:34:48.944899 containerd[1579]: time="2025-02-13T19:34:48.944713002Z" level=info msg="TearDown network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\" successfully" Feb 13 19:34:48.944899 containerd[1579]: time="2025-02-13T19:34:48.944731597Z" level=info msg="StopPodSandbox for \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\" returns successfully" Feb 13 19:34:48.945123 containerd[1579]: time="2025-02-13T19:34:48.945038875Z" level=info msg="StopPodSandbox for \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\"" Feb 13 19:34:48.945163 containerd[1579]: time="2025-02-13T19:34:48.945138622Z" level=info msg="TearDown network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" successfully" Feb 13 19:34:48.945163 containerd[1579]: time="2025-02-13T19:34:48.945152679Z" level=info msg="StopPodSandbox for \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" returns successfully" Feb 13 19:34:48.945626 containerd[1579]: time="2025-02-13T19:34:48.945390486Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\"" Feb 13 19:34:48.945626 containerd[1579]: time="2025-02-13T19:34:48.945474955Z" level=info msg="TearDown network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" successfully" Feb 13 19:34:48.945626 containerd[1579]: time="2025-02-13T19:34:48.945487388Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" returns successfully" Feb 13 19:34:48.945930 containerd[1579]: time="2025-02-13T19:34:48.945858806Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\"" Feb 13 19:34:48.946018 containerd[1579]: time="2025-02-13T19:34:48.946000883Z" level=info 
msg="TearDown network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" successfully" Feb 13 19:34:48.946018 containerd[1579]: time="2025-02-13T19:34:48.946017895Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" returns successfully" Feb 13 19:34:48.946308 containerd[1579]: time="2025-02-13T19:34:48.946281490Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\"" Feb 13 19:34:48.946387 containerd[1579]: time="2025-02-13T19:34:48.946370709Z" level=info msg="TearDown network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" successfully" Feb 13 19:34:48.946428 containerd[1579]: time="2025-02-13T19:34:48.946386438Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" returns successfully" Feb 13 19:34:48.947319 containerd[1579]: time="2025-02-13T19:34:48.947150033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:5,}" Feb 13 19:34:49.050910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e-shm.mount: Deactivated successfully. Feb 13 19:34:49.051169 systemd[1]: run-netns-cni\x2dac0e21fc\x2dca08\x2d4b2f\x2d4d16\x2db65f842bb833.mount: Deactivated successfully. Feb 13 19:34:49.051341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117-shm.mount: Deactivated successfully. Feb 13 19:34:49.051511 systemd[1]: run-netns-cni\x2d6b0d695e\x2d210b\x2dae2f\x2ded51\x2d9ecd54ca3e0b.mount: Deactivated successfully. 
Feb 13 19:34:49.051687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41-shm.mount: Deactivated successfully. Feb 13 19:34:49.051882 systemd[1]: run-netns-cni\x2dca0d9ef0\x2d4d64\x2d5866\x2d1314\x2d111b6eac2b2c.mount: Deactivated successfully. Feb 13 19:34:49.052042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c-shm.mount: Deactivated successfully. Feb 13 19:34:49.052214 systemd[1]: run-netns-cni\x2de970459e\x2dcaa2\x2d6e2c\x2d3b7c\x2dc195ca4a0e9f.mount: Deactivated successfully. Feb 13 19:34:49.052376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837-shm.mount: Deactivated successfully. Feb 13 19:34:49.155018 kubelet[2796]: I0213 19:34:49.154952 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:34:49.155784 kubelet[2796]: E0213 19:34:49.155760 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:49.185852 kubelet[2796]: I0213 19:34:49.185404 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c7tpt" podStartSLOduration=2.144033966 podStartE2EDuration="23.185379301s" podCreationTimestamp="2025-02-13 19:34:26 +0000 UTC" firstStartedPulling="2025-02-13 19:34:27.0572676 +0000 UTC m=+20.093493302" lastFinishedPulling="2025-02-13 19:34:48.098612945 +0000 UTC m=+41.134838637" observedRunningTime="2025-02-13 19:34:48.959664094 +0000 UTC m=+41.995889797" watchObservedRunningTime="2025-02-13 19:34:49.185379301 +0000 UTC m=+42.221605003" Feb 13 19:34:49.424998 systemd-networkd[1246]: caliddbea4d6c55: Link UP Feb 13 19:34:49.425834 systemd-networkd[1246]: caliddbea4d6c55: Gained carrier Feb 13 19:34:49.438453 
containerd[1579]: 2025-02-13 19:34:49.257 [INFO][4864] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.275 [INFO][4864] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0 calico-kube-controllers-6d854b45db- calico-system 66ee47e8-47e1-43f0-b3d2-64715b2b7237 760 0 2025-02-13 19:34:26 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d854b45db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d854b45db-vjxr2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliddbea4d6c55 [] []}} ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.275 [INFO][4864] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.383 [INFO][4925] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" HandleID="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Workload="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 
19:34:49.392 [INFO][4925] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" HandleID="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Workload="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eea90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d854b45db-vjxr2", "timestamp":"2025-02-13 19:34:49.383036735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.392 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.393 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.393 [INFO][4925] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.395 [INFO][4925] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.399 [INFO][4925] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.402 [INFO][4925] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.403 [INFO][4925] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.405 [INFO][4925] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.405 [INFO][4925] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.406 [INFO][4925] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.409 [INFO][4925] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.413 [INFO][4925] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.413 [INFO][4925] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" host="localhost" Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.413 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:49.438453 containerd[1579]: 2025-02-13 19:34:49.413 [INFO][4925] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" HandleID="k8s-pod-network.fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Workload="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" Feb 13 19:34:49.439883 containerd[1579]: 2025-02-13 19:34:49.417 [INFO][4864] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0", GenerateName:"calico-kube-controllers-6d854b45db-", Namespace:"calico-system", SelfLink:"", UID:"66ee47e8-47e1-43f0-b3d2-64715b2b7237", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d854b45db", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d854b45db-vjxr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddbea4d6c55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.439883 containerd[1579]: 2025-02-13 19:34:49.417 [INFO][4864] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" Feb 13 19:34:49.439883 containerd[1579]: 2025-02-13 19:34:49.417 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddbea4d6c55 ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" Feb 13 19:34:49.439883 containerd[1579]: 2025-02-13 19:34:49.426 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" Feb 13 19:34:49.439883 containerd[1579]: 2025-02-13 19:34:49.426 [INFO][4864] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0", GenerateName:"calico-kube-controllers-6d854b45db-", Namespace:"calico-system", SelfLink:"", UID:"66ee47e8-47e1-43f0-b3d2-64715b2b7237", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d854b45db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd", Pod:"calico-kube-controllers-6d854b45db-vjxr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddbea4d6c55", MAC:"76:41:a2:0d:14:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.439883 containerd[1579]: 2025-02-13 19:34:49.434 [INFO][4864] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd" Namespace="calico-system" Pod="calico-kube-controllers-6d854b45db-vjxr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d854b45db--vjxr2-eth0" Feb 13 19:34:49.445575 systemd-networkd[1246]: calid64397d8bfa: Link UP Feb 13 19:34:49.447209 systemd-networkd[1246]: calid64397d8bfa: Gained carrier Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.287 [INFO][4872] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.301 [INFO][4872] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dwgr8-eth0 csi-node-driver- calico-system 58082a0b-a7e3-4696-a1fa-c41d6d0bc84c 609 0 2025-02-13 19:34:26 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dwgr8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid64397d8bfa [] []}} ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.301 [INFO][4872] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-eth0" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.382 [INFO][4946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" HandleID="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Workload="localhost-k8s-csi--node--driver--dwgr8-eth0" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.393 [INFO][4946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" HandleID="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Workload="localhost-k8s-csi--node--driver--dwgr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001746c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dwgr8", "timestamp":"2025-02-13 19:34:49.38241192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.393 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.413 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.413 [INFO][4946] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.415 [INFO][4946] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.418 [INFO][4946] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.422 [INFO][4946] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.423 [INFO][4946] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.425 [INFO][4946] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.425 [INFO][4946] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.427 [INFO][4946] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.431 [INFO][4946] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.438 [INFO][4946] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.438 [INFO][4946] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" host="localhost" Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.438 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:49.460112 containerd[1579]: 2025-02-13 19:34:49.438 [INFO][4946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" HandleID="k8s-pod-network.031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Workload="localhost-k8s-csi--node--driver--dwgr8-eth0" Feb 13 19:34:49.460686 containerd[1579]: 2025-02-13 19:34:49.443 [INFO][4872] cni-plugin/k8s.go 386: Populated endpoint ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dwgr8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58082a0b-a7e3-4696-a1fa-c41d6d0bc84c", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dwgr8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid64397d8bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.460686 containerd[1579]: 2025-02-13 19:34:49.443 [INFO][4872] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-eth0" Feb 13 19:34:49.460686 containerd[1579]: 2025-02-13 19:34:49.443 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid64397d8bfa ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-eth0" Feb 13 19:34:49.460686 containerd[1579]: 2025-02-13 19:34:49.446 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-eth0" Feb 13 19:34:49.460686 containerd[1579]: 2025-02-13 19:34:49.447 [INFO][4872] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" 
Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dwgr8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58082a0b-a7e3-4696-a1fa-c41d6d0bc84c", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c", Pod:"csi-node-driver-dwgr8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid64397d8bfa", MAC:"36:6b:46:50:c0:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.460686 containerd[1579]: 2025-02-13 19:34:49.456 [INFO][4872] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c" Namespace="calico-system" Pod="csi-node-driver-dwgr8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwgr8-eth0" Feb 13 19:34:49.478451 
systemd-networkd[1246]: calic7b454cf799: Link UP Feb 13 19:34:49.478853 systemd-networkd[1246]: calic7b454cf799: Gained carrier Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.303 [INFO][4903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.317 [INFO][4903] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0 calico-apiserver-87cc8ff8d- calico-apiserver 7ddf3646-1cea-4076-a090-fff52499412c 763 0 2025-02-13 19:34:26 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87cc8ff8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-87cc8ff8d-k264t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7b454cf799 [] []}} ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.317 [INFO][4903] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.382 [INFO][4955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" HandleID="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" 
Workload="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.394 [INFO][4955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" HandleID="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Workload="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290bb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-87cc8ff8d-k264t", "timestamp":"2025-02-13 19:34:49.382631894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.394 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.438 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.438 [INFO][4955] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.442 [INFO][4955] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.447 [INFO][4955] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.456 [INFO][4955] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.458 [INFO][4955] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.462 [INFO][4955] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.462 [INFO][4955] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.464 [INFO][4955] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218 Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.468 [INFO][4955] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.472 [INFO][4955] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.472 [INFO][4955] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" host="localhost" Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.472 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:49.542375 containerd[1579]: 2025-02-13 19:34:49.472 [INFO][4955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" HandleID="k8s-pod-network.6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Workload="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" Feb 13 19:34:49.543108 containerd[1579]: 2025-02-13 19:34:49.475 [INFO][4903] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0", GenerateName:"calico-apiserver-87cc8ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ddf3646-1cea-4076-a090-fff52499412c", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87cc8ff8d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-87cc8ff8d-k264t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7b454cf799", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.543108 containerd[1579]: 2025-02-13 19:34:49.475 [INFO][4903] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" Feb 13 19:34:49.543108 containerd[1579]: 2025-02-13 19:34:49.475 [INFO][4903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7b454cf799 ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" Feb 13 19:34:49.543108 containerd[1579]: 2025-02-13 19:34:49.478 [INFO][4903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" Feb 13 19:34:49.543108 containerd[1579]: 2025-02-13 19:34:49.479 [INFO][4903] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0", GenerateName:"calico-apiserver-87cc8ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ddf3646-1cea-4076-a090-fff52499412c", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87cc8ff8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218", Pod:"calico-apiserver-87cc8ff8d-k264t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7b454cf799", MAC:"46:6f:4f:d2:51:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.543108 containerd[1579]: 2025-02-13 19:34:49.539 [INFO][4903] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218" 
Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-k264t" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--k264t-eth0" Feb 13 19:34:49.872206 containerd[1579]: time="2025-02-13T19:34:49.872008955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:49.872514 containerd[1579]: time="2025-02-13T19:34:49.872180939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:49.872514 containerd[1579]: time="2025-02-13T19:34:49.872196027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.872772 containerd[1579]: time="2025-02-13T19:34:49.872642957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.888005 containerd[1579]: time="2025-02-13T19:34:49.884911333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:49.888350 containerd[1579]: time="2025-02-13T19:34:49.888307135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:49.888461 containerd[1579]: time="2025-02-13T19:34:49.888438302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.888989 containerd[1579]: time="2025-02-13T19:34:49.888949383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.901881 containerd[1579]: time="2025-02-13T19:34:49.901269255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:49.903678 containerd[1579]: time="2025-02-13T19:34:49.903567304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:49.903725 containerd[1579]: time="2025-02-13T19:34:49.903701797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.905781 containerd[1579]: time="2025-02-13T19:34:49.905719680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.919940 systemd-networkd[1246]: cali8e5e9f26654: Link UP Feb 13 19:34:49.921131 systemd-networkd[1246]: cali8e5e9f26654: Gained carrier Feb 13 19:34:49.937964 kernel: bpftool[5221]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.252 [INFO][4842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.275 [INFO][4842] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0 coredns-7db6d8ff4d- kube-system 3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6 761 0 2025-02-13 19:34:20 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-97hdm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8e5e9f26654 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-" Feb 13 
19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.275 [INFO][4842] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.382 [INFO][4920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" HandleID="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Workload="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.395 [INFO][4920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" HandleID="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Workload="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388da0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-97hdm", "timestamp":"2025-02-13 19:34:49.38272556 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.395 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.472 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.473 [INFO][4920] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.474 [INFO][4920] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.836 [INFO][4920] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.842 [INFO][4920] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.843 [INFO][4920] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.848 [INFO][4920] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.848 [INFO][4920] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.851 [INFO][4920] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.861 [INFO][4920] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.876 [INFO][4920] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.876 [INFO][4920] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" host="localhost" Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.876 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:49.945771 containerd[1579]: 2025-02-13 19:34:49.876 [INFO][4920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" HandleID="k8s-pod-network.573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Workload="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" Feb 13 19:34:49.946330 containerd[1579]: 2025-02-13 19:34:49.904 [INFO][4842] cni-plugin/k8s.go 386: Populated endpoint ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-97hdm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e5e9f26654", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.946330 containerd[1579]: 2025-02-13 19:34:49.905 [INFO][4842] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" Feb 13 19:34:49.946330 containerd[1579]: 2025-02-13 19:34:49.905 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e5e9f26654 ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" Feb 13 19:34:49.946330 containerd[1579]: 2025-02-13 19:34:49.920 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" Feb 13 
19:34:49.946330 containerd[1579]: 2025-02-13 19:34:49.921 [INFO][4842] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c", Pod:"coredns-7db6d8ff4d-97hdm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e5e9f26654", MAC:"3a:a7:d3:19:84:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.946330 containerd[1579]: 2025-02-13 19:34:49.937 [INFO][4842] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-97hdm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--97hdm-eth0" Feb 13 19:34:49.949598 kubelet[2796]: E0213 19:34:49.948623 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:49.951739 kubelet[2796]: E0213 19:34:49.950700 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:49.953579 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:49.961950 systemd-networkd[1246]: calied01a435fd7: Link UP Feb 13 19:34:49.963270 systemd-networkd[1246]: calied01a435fd7: Gained carrier Feb 13 19:34:49.975523 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.275 [INFO][4870] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.298 [INFO][4870] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0 calico-apiserver-87cc8ff8d- calico-apiserver 23896514-92a6-4ab0-b171-3b7efd7da770 762 0 2025-02-13 19:34:26 +0000 UTC <nil> <nil> map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87cc8ff8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-87cc8ff8d-j5sxt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied01a435fd7 [] []}} ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.298 [INFO][4870] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.382 [INFO][4939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" HandleID="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Workload="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.396 [INFO][4939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" HandleID="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Workload="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-87cc8ff8d-j5sxt", "timestamp":"2025-02-13 19:34:49.382629008 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.396 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.878 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.879 [INFO][4939] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.884 [INFO][4939] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.898 [INFO][4939] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.917 [INFO][4939] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.920 [INFO][4939] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.925 [INFO][4939] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.925 [INFO][4939] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.937 [INFO][4939] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12 Feb 13 
19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.942 [INFO][4939] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.949 [INFO][4939] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.950 [INFO][4939] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" host="localhost" Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.950 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:49.982741 containerd[1579]: 2025-02-13 19:34:49.950 [INFO][4939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" HandleID="k8s-pod-network.ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Workload="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" Feb 13 19:34:49.983316 containerd[1579]: 2025-02-13 19:34:49.957 [INFO][4870] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0", GenerateName:"calico-apiserver-87cc8ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"23896514-92a6-4ab0-b171-3b7efd7da770", 
ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87cc8ff8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-87cc8ff8d-j5sxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied01a435fd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.983316 containerd[1579]: 2025-02-13 19:34:49.957 [INFO][4870] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" Feb 13 19:34:49.983316 containerd[1579]: 2025-02-13 19:34:49.957 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied01a435fd7 ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" Feb 13 19:34:49.983316 containerd[1579]: 2025-02-13 19:34:49.964 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" Feb 13 19:34:49.983316 containerd[1579]: 2025-02-13 19:34:49.964 [INFO][4870] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0", GenerateName:"calico-apiserver-87cc8ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"23896514-92a6-4ab0-b171-3b7efd7da770", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87cc8ff8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12", Pod:"calico-apiserver-87cc8ff8d-j5sxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied01a435fd7", MAC:"c2:86:48:d5:32:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.983316 containerd[1579]: 2025-02-13 19:34:49.974 [INFO][4870] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12" Namespace="calico-apiserver" Pod="calico-apiserver-87cc8ff8d-j5sxt" WorkloadEndpoint="localhost-k8s-calico--apiserver--87cc8ff8d--j5sxt-eth0" Feb 13 19:34:50.009354 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:50.026506 containerd[1579]: time="2025-02-13T19:34:50.026399209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwgr8,Uid:58082a0b-a7e3-4696-a1fa-c41d6d0bc84c,Namespace:calico-system,Attempt:4,} returns sandbox id \"031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c\"" Feb 13 19:34:50.029213 containerd[1579]: time="2025-02-13T19:34:50.029005707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:34:50.039357 containerd[1579]: time="2025-02-13T19:34:50.039271337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d854b45db-vjxr2,Uid:66ee47e8-47e1-43f0-b3d2-64715b2b7237,Namespace:calico-system,Attempt:5,} returns sandbox id \"fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd\"" Feb 13 19:34:50.053683 containerd[1579]: time="2025-02-13T19:34:50.051287977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:50.053683 containerd[1579]: time="2025-02-13T19:34:50.051352378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:50.053683 containerd[1579]: time="2025-02-13T19:34:50.051363459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:50.053683 containerd[1579]: time="2025-02-13T19:34:50.051450352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:50.068732 containerd[1579]: time="2025-02-13T19:34:50.067617571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:50.068732 containerd[1579]: time="2025-02-13T19:34:50.067676121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:50.068732 containerd[1579]: time="2025-02-13T19:34:50.067690268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:50.068732 containerd[1579]: time="2025-02-13T19:34:50.067792760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:50.073390 systemd-networkd[1246]: cali49901ba35b0: Link UP Feb 13 19:34:50.073615 systemd-networkd[1246]: cali49901ba35b0: Gained carrier Feb 13 19:34:50.076852 containerd[1579]: time="2025-02-13T19:34:50.076193045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-k264t,Uid:7ddf3646-1cea-4076-a090-fff52499412c,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218\"" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.235 [INFO][4853] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.257 [INFO][4853] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--krffx-eth0 coredns-7db6d8ff4d- kube-system 7703fc78-f952-4a15-b2f4-c2b67bf6b32a 757 0 2025-02-13 19:34:20 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-krffx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49901ba35b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.257 [INFO][4853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.392 
[INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" HandleID="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Workload="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.399 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" HandleID="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Workload="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038ac30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-krffx", "timestamp":"2025-02-13 19:34:49.392363602 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.399 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.951 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.951 [INFO][4919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.954 [INFO][4919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.983 [INFO][4919] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:49.998 [INFO][4919] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.008 [INFO][4919] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.014 [INFO][4919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.015 [INFO][4919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.018 [INFO][4919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.024 [INFO][4919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.042 [INFO][4919] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.042 [INFO][4919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" host="localhost" Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.042 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:50.100928 containerd[1579]: 2025-02-13 19:34:50.042 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" HandleID="k8s-pod-network.27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Workload="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" Feb 13 19:34:50.101489 containerd[1579]: 2025-02-13 19:34:50.066 [INFO][4853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--krffx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7703fc78-f952-4a15-b2f4-c2b67bf6b32a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-krffx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49901ba35b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:50.101489 containerd[1579]: 2025-02-13 19:34:50.067 [INFO][4853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" Feb 13 19:34:50.101489 containerd[1579]: 2025-02-13 19:34:50.067 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49901ba35b0 ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" Feb 13 19:34:50.101489 containerd[1579]: 2025-02-13 19:34:50.073 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" Feb 13 
19:34:50.101489 containerd[1579]: 2025-02-13 19:34:50.074 [INFO][4853] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--krffx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7703fc78-f952-4a15-b2f4-c2b67bf6b32a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f", Pod:"coredns-7db6d8ff4d-krffx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49901ba35b0", MAC:"32:8f:1a:90:60:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:50.101489 containerd[1579]: 2025-02-13 19:34:50.091 [INFO][4853] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krffx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--krffx-eth0" Feb 13 19:34:50.115518 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:50.120851 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:50.134590 containerd[1579]: time="2025-02-13T19:34:50.134499346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:50.137419 containerd[1579]: time="2025-02-13T19:34:50.134762801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:50.137513 containerd[1579]: time="2025-02-13T19:34:50.137406318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:50.139885 containerd[1579]: time="2025-02-13T19:34:50.138742770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:50.161015 containerd[1579]: time="2025-02-13T19:34:50.160972972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-97hdm,Uid:3ba87dae-8fd1-49d8-a5f3-d249a54d1dd6,Namespace:kube-system,Attempt:5,} returns sandbox id \"573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c\"" Feb 13 19:34:50.161852 kubelet[2796]: E0213 19:34:50.161811 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:50.163921 containerd[1579]: time="2025-02-13T19:34:50.163851861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cc8ff8d-j5sxt,Uid:23896514-92a6-4ab0-b171-3b7efd7da770,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12\"" Feb 13 19:34:50.167706 containerd[1579]: time="2025-02-13T19:34:50.167667080Z" level=info msg="CreateContainer within sandbox \"573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:50.173519 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:50.196744 containerd[1579]: time="2025-02-13T19:34:50.196698712Z" level=info msg="CreateContainer within sandbox \"573a58a4a8966006cd7c91c75216745387b61658c21ec164b29c0d932a90050c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cccb5f75f97f46edb65be620476b2a57a96d57e36583681c3a3370863e35f0c8\"" Feb 13 19:34:50.198983 containerd[1579]: time="2025-02-13T19:34:50.198951426Z" level=info msg="StartContainer for \"cccb5f75f97f46edb65be620476b2a57a96d57e36583681c3a3370863e35f0c8\"" Feb 13 19:34:50.209948 containerd[1579]: time="2025-02-13T19:34:50.209734457Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-krffx,Uid:7703fc78-f952-4a15-b2f4-c2b67bf6b32a,Namespace:kube-system,Attempt:5,} returns sandbox id \"27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f\"" Feb 13 19:34:50.210553 kubelet[2796]: E0213 19:34:50.210529 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:50.214387 containerd[1579]: time="2025-02-13T19:34:50.214361512Z" level=info msg="CreateContainer within sandbox \"27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:50.235285 containerd[1579]: time="2025-02-13T19:34:50.235223012Z" level=info msg="CreateContainer within sandbox \"27f1fe8320810a000de096d03d69395c9d880ad47ffd2d270527db57e51f1e4f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e0340c6bf5b673407e3fb707588fffd05d995c840a368d9174b4b061f8202a9\"" Feb 13 19:34:50.236786 containerd[1579]: time="2025-02-13T19:34:50.236762756Z" level=info msg="StartContainer for \"3e0340c6bf5b673407e3fb707588fffd05d995c840a368d9174b4b061f8202a9\"" Feb 13 19:34:50.280652 containerd[1579]: time="2025-02-13T19:34:50.280590170Z" level=info msg="StartContainer for \"cccb5f75f97f46edb65be620476b2a57a96d57e36583681c3a3370863e35f0c8\" returns successfully" Feb 13 19:34:50.286886 systemd-networkd[1246]: vxlan.calico: Link UP Feb 13 19:34:50.286895 systemd-networkd[1246]: vxlan.calico: Gained carrier Feb 13 19:34:50.314918 containerd[1579]: time="2025-02-13T19:34:50.314687501Z" level=info msg="StartContainer for \"3e0340c6bf5b673407e3fb707588fffd05d995c840a368d9174b4b061f8202a9\" returns successfully" Feb 13 19:34:50.825957 systemd-networkd[1246]: calid64397d8bfa: Gained IPv6LL Feb 13 19:34:50.890058 systemd-networkd[1246]: caliddbea4d6c55: Gained IPv6LL Feb 13 19:34:50.952786 kubelet[2796]: E0213 19:34:50.952718 2796 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:50.953972 systemd-networkd[1246]: calic7b454cf799: Gained IPv6LL Feb 13 19:34:50.957099 kubelet[2796]: E0213 19:34:50.956859 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:50.961688 kubelet[2796]: I0213 19:34:50.961144 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-97hdm" podStartSLOduration=30.961131178 podStartE2EDuration="30.961131178s" podCreationTimestamp="2025-02-13 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:50.960948084 +0000 UTC m=+43.997173786" watchObservedRunningTime="2025-02-13 19:34:50.961131178 +0000 UTC m=+43.997356880" Feb 13 19:34:50.974002 kubelet[2796]: I0213 19:34:50.973913 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-krffx" podStartSLOduration=30.973818208 podStartE2EDuration="30.973818208s" podCreationTimestamp="2025-02-13 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:50.973467869 +0000 UTC m=+44.009693571" watchObservedRunningTime="2025-02-13 19:34:50.973818208 +0000 UTC m=+44.010043910" Feb 13 19:34:51.145968 systemd-networkd[1246]: cali8e5e9f26654: Gained IPv6LL Feb 13 19:34:51.337936 systemd-networkd[1246]: calied01a435fd7: Gained IPv6LL Feb 13 19:34:51.451025 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:36058.service - OpenSSH per-connection server daemon (10.0.0.1:36058). 
Feb 13 19:34:51.494794 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:34:51.496671 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:51.500962 systemd-logind[1555]: New session 11 of user core. Feb 13 19:34:51.516061 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:34:51.772037 sshd[5606]: Connection closed by 10.0.0.1 port 36058 Feb 13 19:34:51.772433 sshd-session[5603]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:51.783111 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:36072.service - OpenSSH per-connection server daemon (10.0.0.1:36072). Feb 13 19:34:51.783648 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:36058.service: Deactivated successfully. Feb 13 19:34:51.786503 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:34:51.787611 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:34:51.788916 systemd-logind[1555]: Removed session 11. Feb 13 19:34:51.823998 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 36072 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:34:51.825630 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:51.831110 systemd-logind[1555]: New session 12 of user core. Feb 13 19:34:51.839270 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 19:34:51.916486 containerd[1579]: time="2025-02-13T19:34:51.916426614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:51.918387 containerd[1579]: time="2025-02-13T19:34:51.918354537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:34:51.919820 containerd[1579]: time="2025-02-13T19:34:51.919728608Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:51.921886 containerd[1579]: time="2025-02-13T19:34:51.921854011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:51.922515 containerd[1579]: time="2025-02-13T19:34:51.922445723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.893401503s" Feb 13 19:34:51.922515 containerd[1579]: time="2025-02-13T19:34:51.922475710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:34:51.925128 containerd[1579]: time="2025-02-13T19:34:51.924851283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:34:51.925352 containerd[1579]: time="2025-02-13T19:34:51.925325744Z" level=info msg="CreateContainer within sandbox \"031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:34:51.972697 kubelet[2796]: E0213 19:34:51.972657 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:51.973149 kubelet[2796]: E0213 19:34:51.972911 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:52.042009 systemd-networkd[1246]: vxlan.calico: Gained IPv6LL Feb 13 19:34:52.107002 systemd-networkd[1246]: cali49901ba35b0: Gained IPv6LL Feb 13 19:34:52.254230 sshd[5628]: Connection closed by 10.0.0.1 port 36072 Feb 13 19:34:52.255030 sshd-session[5616]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:52.260714 containerd[1579]: time="2025-02-13T19:34:52.260378232Z" level=info msg="CreateContainer within sandbox \"031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"642647ed8e1aefdd7f125045e8642bf056e698f91d0319a6dc060aecd499a1d7\"" Feb 13 19:34:52.263068 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:36080.service - OpenSSH per-connection server daemon (10.0.0.1:36080). Feb 13 19:34:52.267385 containerd[1579]: time="2025-02-13T19:34:52.263981380Z" level=info msg="StartContainer for \"642647ed8e1aefdd7f125045e8642bf056e698f91d0319a6dc060aecd499a1d7\"" Feb 13 19:34:52.263580 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:36072.service: Deactivated successfully. Feb 13 19:34:52.271583 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:34:52.272514 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:34:52.275189 systemd-logind[1555]: Removed session 12. 
Feb 13 19:34:52.318150 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 36080 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:34:52.320369 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:52.324640 systemd-logind[1555]: New session 13 of user core. Feb 13 19:34:52.329125 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:34:52.460522 containerd[1579]: time="2025-02-13T19:34:52.460473063Z" level=info msg="StartContainer for \"642647ed8e1aefdd7f125045e8642bf056e698f91d0319a6dc060aecd499a1d7\" returns successfully" Feb 13 19:34:52.492575 sshd[5666]: Connection closed by 10.0.0.1 port 36080 Feb 13 19:34:52.493019 sshd-session[5637]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:52.497669 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:36080.service: Deactivated successfully. Feb 13 19:34:52.500488 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:34:52.501252 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:34:52.502231 systemd-logind[1555]: Removed session 13. 
Feb 13 19:34:52.978405 kubelet[2796]: E0213 19:34:52.978127 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:52.978405 kubelet[2796]: E0213 19:34:52.978323 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:54.354417 containerd[1579]: time="2025-02-13T19:34:54.354365826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:54.355367 containerd[1579]: time="2025-02-13T19:34:54.355304409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:34:54.356640 containerd[1579]: time="2025-02-13T19:34:54.356611225Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:54.360253 containerd[1579]: time="2025-02-13T19:34:54.360150271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:54.360747 containerd[1579]: time="2025-02-13T19:34:54.360723398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.435840324s" Feb 13 19:34:54.360844 containerd[1579]: 
time="2025-02-13T19:34:54.360751430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:34:54.361753 containerd[1579]: time="2025-02-13T19:34:54.361723927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:34:54.368115 containerd[1579]: time="2025-02-13T19:34:54.368075656Z" level=info msg="CreateContainer within sandbox \"fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:34:54.561632 containerd[1579]: time="2025-02-13T19:34:54.561589542Z" level=info msg="CreateContainer within sandbox \"fd73063db61c3a14d55fc350073e48d9a58db764d09ddf339fb64941289809fd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7765f8ab7e5225cfaea5fcd47d990a026eb98a9714b61daeb2b2dd37f407f056\"" Feb 13 19:34:54.562094 containerd[1579]: time="2025-02-13T19:34:54.562074502Z" level=info msg="StartContainer for \"7765f8ab7e5225cfaea5fcd47d990a026eb98a9714b61daeb2b2dd37f407f056\"" Feb 13 19:34:54.636083 containerd[1579]: time="2025-02-13T19:34:54.636021245Z" level=info msg="StartContainer for \"7765f8ab7e5225cfaea5fcd47d990a026eb98a9714b61daeb2b2dd37f407f056\" returns successfully" Feb 13 19:34:54.999716 kubelet[2796]: I0213 19:34:54.999142 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d854b45db-vjxr2" podStartSLOduration=24.680356395 podStartE2EDuration="28.999126816s" podCreationTimestamp="2025-02-13 19:34:26 +0000 UTC" firstStartedPulling="2025-02-13 19:34:50.042788726 +0000 UTC m=+43.079014418" lastFinishedPulling="2025-02-13 19:34:54.361559137 +0000 UTC m=+47.397784839" observedRunningTime="2025-02-13 19:34:54.998332855 +0000 UTC m=+48.034558557" watchObservedRunningTime="2025-02-13 19:34:54.999126816 +0000 UTC 
m=+48.035352518" Feb 13 19:34:56.701400 containerd[1579]: time="2025-02-13T19:34:56.701355512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:56.702241 containerd[1579]: time="2025-02-13T19:34:56.702207471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:34:56.703524 containerd[1579]: time="2025-02-13T19:34:56.703499798Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:56.705931 containerd[1579]: time="2025-02-13T19:34:56.705886249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:56.706499 containerd[1579]: time="2025-02-13T19:34:56.706459075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.34469938s" Feb 13 19:34:56.706499 containerd[1579]: time="2025-02-13T19:34:56.706494762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:34:56.707345 containerd[1579]: time="2025-02-13T19:34:56.707323057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:34:56.708520 containerd[1579]: time="2025-02-13T19:34:56.708470913Z" level=info msg="CreateContainer within sandbox 
\"6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:34:56.722309 containerd[1579]: time="2025-02-13T19:34:56.722235650Z" level=info msg="CreateContainer within sandbox \"6d7c35f24b6b24ac4f4696432ef0b37708c3b2a4a3ee14652cedc9de22210218\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a6e627f1d473335e5a22365723a7e04eed9557261142302d9e32d2d14adb97b6\"" Feb 13 19:34:56.722947 containerd[1579]: time="2025-02-13T19:34:56.722905126Z" level=info msg="StartContainer for \"a6e627f1d473335e5a22365723a7e04eed9557261142302d9e32d2d14adb97b6\"" Feb 13 19:34:56.785949 containerd[1579]: time="2025-02-13T19:34:56.785886389Z" level=info msg="StartContainer for \"a6e627f1d473335e5a22365723a7e04eed9557261142302d9e32d2d14adb97b6\" returns successfully" Feb 13 19:34:57.058415 kubelet[2796]: I0213 19:34:57.058254 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87cc8ff8d-k264t" podStartSLOduration=24.433790806 podStartE2EDuration="31.058230576s" podCreationTimestamp="2025-02-13 19:34:26 +0000 UTC" firstStartedPulling="2025-02-13 19:34:50.08274648 +0000 UTC m=+43.118972182" lastFinishedPulling="2025-02-13 19:34:56.70718625 +0000 UTC m=+49.743411952" observedRunningTime="2025-02-13 19:34:57.055193875 +0000 UTC m=+50.091419577" watchObservedRunningTime="2025-02-13 19:34:57.058230576 +0000 UTC m=+50.094456278" Feb 13 19:34:57.280278 containerd[1579]: time="2025-02-13T19:34:57.279534273Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:57.281084 containerd[1579]: time="2025-02-13T19:34:57.280774802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:34:57.283743 containerd[1579]: time="2025-02-13T19:34:57.283700555Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 576.348374ms" Feb 13 19:34:57.283743 containerd[1579]: time="2025-02-13T19:34:57.283737765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:34:57.284859 containerd[1579]: time="2025-02-13T19:34:57.284591869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:34:57.286492 containerd[1579]: time="2025-02-13T19:34:57.286294386Z" level=info msg="CreateContainer within sandbox \"ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:34:57.403177 containerd[1579]: time="2025-02-13T19:34:57.403136346Z" level=info msg="CreateContainer within sandbox \"ff9241c57c2f0d21f9c8892d5391897b252ebb0c9a83f4e3e92def69b8c3ea12\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"27d8d09967d7c9999bace68ceae39aba897b31f697c84260bc0b61eea8e9e771\"" Feb 13 19:34:57.403651 containerd[1579]: time="2025-02-13T19:34:57.403615115Z" level=info msg="StartContainer for \"27d8d09967d7c9999bace68ceae39aba897b31f697c84260bc0b61eea8e9e771\"" Feb 13 19:34:57.506872 containerd[1579]: time="2025-02-13T19:34:57.504725949Z" level=info msg="StartContainer for \"27d8d09967d7c9999bace68ceae39aba897b31f697c84260bc0b61eea8e9e771\" returns successfully" Feb 13 19:34:57.506593 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:36088.service - OpenSSH per-connection server daemon (10.0.0.1:36088). 
Feb 13 19:34:57.574609 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 36088 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:34:57.576219 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:57.580558 systemd-logind[1555]: New session 14 of user core. Feb 13 19:34:57.587165 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:34:57.719398 sshd[5847]: Connection closed by 10.0.0.1 port 36088 Feb 13 19:34:57.719983 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:57.725206 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:36088.service: Deactivated successfully. Feb 13 19:34:57.729742 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:34:57.730344 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:34:57.731502 systemd-logind[1555]: Removed session 14. Feb 13 19:34:58.002121 kubelet[2796]: I0213 19:34:58.001955 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:34:59.005356 kubelet[2796]: I0213 19:34:59.005328 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:34:59.251322 containerd[1579]: time="2025-02-13T19:34:59.251281575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:59.252415 containerd[1579]: time="2025-02-13T19:34:59.252377863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:34:59.258091 containerd[1579]: time="2025-02-13T19:34:59.258012150Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:59.263314 containerd[1579]: 
time="2025-02-13T19:34:59.263273778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:59.263891 containerd[1579]: time="2025-02-13T19:34:59.263868223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.979241359s" Feb 13 19:34:59.263933 containerd[1579]: time="2025-02-13T19:34:59.263895434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:34:59.265792 containerd[1579]: time="2025-02-13T19:34:59.265760655Z" level=info msg="CreateContainer within sandbox \"031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:34:59.610685 containerd[1579]: time="2025-02-13T19:34:59.610642644Z" level=info msg="CreateContainer within sandbox \"031f243008737ba58e30a12b0684301bee68c04d41da79bf823063f6b251879c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4ecea7ffb3c472ae278e5f4bebe94a215eb3501d8ffc9444efd179158dec31f6\"" Feb 13 19:34:59.611394 containerd[1579]: time="2025-02-13T19:34:59.611119268Z" level=info msg="StartContainer for \"4ecea7ffb3c472ae278e5f4bebe94a215eb3501d8ffc9444efd179158dec31f6\"" Feb 13 19:34:59.720112 containerd[1579]: time="2025-02-13T19:34:59.720058918Z" level=info msg="StartContainer for 
\"4ecea7ffb3c472ae278e5f4bebe94a215eb3501d8ffc9444efd179158dec31f6\" returns successfully" Feb 13 19:35:00.027694 kubelet[2796]: I0213 19:35:00.027404 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87cc8ff8d-j5sxt" podStartSLOduration=26.912923267 podStartE2EDuration="34.027379237s" podCreationTimestamp="2025-02-13 19:34:26 +0000 UTC" firstStartedPulling="2025-02-13 19:34:50.169933409 +0000 UTC m=+43.206159111" lastFinishedPulling="2025-02-13 19:34:57.284389369 +0000 UTC m=+50.320615081" observedRunningTime="2025-02-13 19:34:58.014021172 +0000 UTC m=+51.050246874" watchObservedRunningTime="2025-02-13 19:35:00.027379237 +0000 UTC m=+53.063604939" Feb 13 19:35:00.027694 kubelet[2796]: I0213 19:35:00.027505 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dwgr8" podStartSLOduration=24.791402388 podStartE2EDuration="34.027501877s" podCreationTimestamp="2025-02-13 19:34:26 +0000 UTC" firstStartedPulling="2025-02-13 19:34:50.028468418 +0000 UTC m=+43.064694120" lastFinishedPulling="2025-02-13 19:34:59.264567907 +0000 UTC m=+52.300793609" observedRunningTime="2025-02-13 19:35:00.027205231 +0000 UTC m=+53.063430923" watchObservedRunningTime="2025-02-13 19:35:00.027501877 +0000 UTC m=+53.063727579" Feb 13 19:35:00.132676 kubelet[2796]: I0213 19:35:00.132636 2796 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:35:00.132676 kubelet[2796]: I0213 19:35:00.132665 2796 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:35:02.376980 kubelet[2796]: I0213 19:35:02.376932 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:35:02.734075 systemd[1]: Started 
sshd@14-10.0.0.18:22-10.0.0.1:58394.service - OpenSSH per-connection server daemon (10.0.0.1:58394). Feb 13 19:35:02.782068 sshd[5912]: Accepted publickey for core from 10.0.0.1 port 58394 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:35:02.783513 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:02.787466 systemd-logind[1555]: New session 15 of user core. Feb 13 19:35:02.796035 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:35:02.912878 sshd[5915]: Connection closed by 10.0.0.1 port 58394 Feb 13 19:35:02.913205 sshd-session[5912]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:02.923041 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:58410.service - OpenSSH per-connection server daemon (10.0.0.1:58410). Feb 13 19:35:02.923882 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:58394.service: Deactivated successfully. Feb 13 19:35:02.927298 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:35:02.928223 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:35:02.929564 systemd-logind[1555]: Removed session 15. Feb 13 19:35:02.963257 sshd[5925]: Accepted publickey for core from 10.0.0.1 port 58410 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:35:02.964937 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:02.968914 systemd-logind[1555]: New session 16 of user core. Feb 13 19:35:02.985097 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:35:03.171129 sshd[5931]: Connection closed by 10.0.0.1 port 58410 Feb 13 19:35:03.171629 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:03.181013 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:58416.service - OpenSSH per-connection server daemon (10.0.0.1:58416). 
Feb 13 19:35:03.181487 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:58410.service: Deactivated successfully. Feb 13 19:35:03.185073 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:35:03.185439 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:35:03.186726 systemd-logind[1555]: Removed session 16. Feb 13 19:35:03.223557 sshd[5940]: Accepted publickey for core from 10.0.0.1 port 58416 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:35:03.224992 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:03.229007 systemd-logind[1555]: New session 17 of user core. Feb 13 19:35:03.241045 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:35:04.712424 kubelet[2796]: E0213 19:35:04.712326 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:04.928607 sshd[5946]: Connection closed by 10.0.0.1 port 58416 Feb 13 19:35:04.929630 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:04.942103 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:58432.service - OpenSSH per-connection server daemon (10.0.0.1:58432). Feb 13 19:35:04.942611 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:58416.service: Deactivated successfully. Feb 13 19:35:04.946936 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:35:04.948525 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:35:04.951203 systemd-logind[1555]: Removed session 17. 
Feb 13 19:35:04.986974 sshd[5982]: Accepted publickey for core from 10.0.0.1 port 58432 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:35:04.988594 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:04.992932 systemd-logind[1555]: New session 18 of user core. Feb 13 19:35:05.000058 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:35:05.225368 sshd[5990]: Connection closed by 10.0.0.1 port 58432 Feb 13 19:35:05.227126 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:05.230504 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:58432.service: Deactivated successfully. Feb 13 19:35:05.235485 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:35:05.241226 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:58446.service - OpenSSH per-connection server daemon (10.0.0.1:58446). Feb 13 19:35:05.241746 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:35:05.243409 systemd-logind[1555]: Removed session 18. Feb 13 19:35:05.282821 sshd[6000]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:35:05.284908 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:05.289567 systemd-logind[1555]: New session 19 of user core. Feb 13 19:35:05.298140 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:35:05.467420 sshd[6003]: Connection closed by 10.0.0.1 port 58446 Feb 13 19:35:05.467776 sshd-session[6000]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:05.471934 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:58446.service: Deactivated successfully. Feb 13 19:35:05.474096 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:35:05.474183 systemd[1]: session-19.scope: Deactivated successfully. 
Feb 13 19:35:05.475218 systemd-logind[1555]: Removed session 19. Feb 13 19:35:07.037797 containerd[1579]: time="2025-02-13T19:35:07.037744962Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\"" Feb 13 19:35:07.038355 containerd[1579]: time="2025-02-13T19:35:07.037869466Z" level=info msg="TearDown network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" successfully" Feb 13 19:35:07.038355 containerd[1579]: time="2025-02-13T19:35:07.037880246Z" level=info msg="StopPodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" returns successfully" Feb 13 19:35:07.044446 containerd[1579]: time="2025-02-13T19:35:07.044403456Z" level=info msg="RemovePodSandbox for \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\"" Feb 13 19:35:07.048351 containerd[1579]: time="2025-02-13T19:35:07.048314003Z" level=info msg="Forcibly stopping sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\"" Feb 13 19:35:07.048531 containerd[1579]: time="2025-02-13T19:35:07.048468383Z" level=info msg="TearDown network for sandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" successfully" Feb 13 19:35:07.052528 containerd[1579]: time="2025-02-13T19:35:07.052500639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:07.052583 containerd[1579]: time="2025-02-13T19:35:07.052552947Z" level=info msg="RemovePodSandbox \"719a8a7a3d31570805cc5408f02a249686c0b36caabe2f6be8cc45345400d95b\" returns successfully"
Feb 13 19:35:07.053041 containerd[1579]: time="2025-02-13T19:35:07.052995839Z" level=info msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\""
Feb 13 19:35:07.053109 containerd[1579]: time="2025-02-13T19:35:07.053091628Z" level=info msg="TearDown network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" successfully"
Feb 13 19:35:07.053145 containerd[1579]: time="2025-02-13T19:35:07.053108490Z" level=info msg="StopPodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" returns successfully"
Feb 13 19:35:07.053874 containerd[1579]: time="2025-02-13T19:35:07.053362767Z" level=info msg="RemovePodSandbox for \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\""
Feb 13 19:35:07.053874 containerd[1579]: time="2025-02-13T19:35:07.053389286Z" level=info msg="Forcibly stopping sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\""
Feb 13 19:35:07.053874 containerd[1579]: time="2025-02-13T19:35:07.053467493Z" level=info msg="TearDown network for sandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" successfully"
Feb 13 19:35:07.060954 containerd[1579]: time="2025-02-13T19:35:07.060918353Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.061016 containerd[1579]: time="2025-02-13T19:35:07.060982564Z" level=info msg="RemovePodSandbox \"5d01039e684128ae512054b9e826141fbc0ad42c9ceb04d1bb71ebf9bf55ce56\" returns successfully"
Feb 13 19:35:07.061534 containerd[1579]: time="2025-02-13T19:35:07.061341107Z" level=info msg="StopPodSandbox for \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\""
Feb 13 19:35:07.061534 containerd[1579]: time="2025-02-13T19:35:07.061457996Z" level=info msg="TearDown network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" successfully"
Feb 13 19:35:07.061534 containerd[1579]: time="2025-02-13T19:35:07.061468666Z" level=info msg="StopPodSandbox for \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" returns successfully"
Feb 13 19:35:07.061724 containerd[1579]: time="2025-02-13T19:35:07.061696724Z" level=info msg="RemovePodSandbox for \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\""
Feb 13 19:35:07.061760 containerd[1579]: time="2025-02-13T19:35:07.061730437Z" level=info msg="Forcibly stopping sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\""
Feb 13 19:35:07.061900 containerd[1579]: time="2025-02-13T19:35:07.061850623Z" level=info msg="TearDown network for sandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" successfully"
Feb 13 19:35:07.066790 containerd[1579]: time="2025-02-13T19:35:07.066761728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.066870 containerd[1579]: time="2025-02-13T19:35:07.066858479Z" level=info msg="RemovePodSandbox \"11942d9b6607d0cfca886fc07d7886f91efe478e80b625e07f69bc87cb2f67b6\" returns successfully"
Feb 13 19:35:07.067167 containerd[1579]: time="2025-02-13T19:35:07.067134578Z" level=info msg="StopPodSandbox for \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\""
Feb 13 19:35:07.067250 containerd[1579]: time="2025-02-13T19:35:07.067229446Z" level=info msg="TearDown network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\" successfully"
Feb 13 19:35:07.067332 containerd[1579]: time="2025-02-13T19:35:07.067240386Z" level=info msg="StopPodSandbox for \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\" returns successfully"
Feb 13 19:35:07.067927 containerd[1579]: time="2025-02-13T19:35:07.067638453Z" level=info msg="RemovePodSandbox for \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\""
Feb 13 19:35:07.067927 containerd[1579]: time="2025-02-13T19:35:07.067670393Z" level=info msg="Forcibly stopping sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\""
Feb 13 19:35:07.067927 containerd[1579]: time="2025-02-13T19:35:07.067753970Z" level=info msg="TearDown network for sandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\" successfully"
Feb 13 19:35:07.078105 containerd[1579]: time="2025-02-13T19:35:07.078005565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.078105 containerd[1579]: time="2025-02-13T19:35:07.078059837Z" level=info msg="RemovePodSandbox \"e4a645b4a9cf4536f94b9398959e737fc3c26d5154a25f2cb6198527f68658b8\" returns successfully"
Feb 13 19:35:07.078422 containerd[1579]: time="2025-02-13T19:35:07.078393563Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\""
Feb 13 19:35:07.078494 containerd[1579]: time="2025-02-13T19:35:07.078480296Z" level=info msg="TearDown network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" successfully"
Feb 13 19:35:07.078518 containerd[1579]: time="2025-02-13T19:35:07.078493541Z" level=info msg="StopPodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" returns successfully"
Feb 13 19:35:07.078725 containerd[1579]: time="2025-02-13T19:35:07.078700890Z" level=info msg="RemovePodSandbox for \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\""
Feb 13 19:35:07.078725 containerd[1579]: time="2025-02-13T19:35:07.078723171Z" level=info msg="Forcibly stopping sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\""
Feb 13 19:35:07.078837 containerd[1579]: time="2025-02-13T19:35:07.078789276Z" level=info msg="TearDown network for sandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" successfully"
Feb 13 19:35:07.082617 containerd[1579]: time="2025-02-13T19:35:07.082571152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.082694 containerd[1579]: time="2025-02-13T19:35:07.082628079Z" level=info msg="RemovePodSandbox \"c7869de4baa59ef3c1b1295a257f9b6185ce2de80c275658ccb3ebba8fbe1a4a\" returns successfully"
Feb 13 19:35:07.082961 containerd[1579]: time="2025-02-13T19:35:07.082937239Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\""
Feb 13 19:35:07.083084 containerd[1579]: time="2025-02-13T19:35:07.083021507Z" level=info msg="TearDown network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" successfully"
Feb 13 19:35:07.083084 containerd[1579]: time="2025-02-13T19:35:07.083075789Z" level=info msg="StopPodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" returns successfully"
Feb 13 19:35:07.083338 containerd[1579]: time="2025-02-13T19:35:07.083307785Z" level=info msg="RemovePodSandbox for \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\""
Feb 13 19:35:07.083338 containerd[1579]: time="2025-02-13T19:35:07.083332942Z" level=info msg="Forcibly stopping sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\""
Feb 13 19:35:07.083433 containerd[1579]: time="2025-02-13T19:35:07.083399106Z" level=info msg="TearDown network for sandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" successfully"
Feb 13 19:35:07.087270 containerd[1579]: time="2025-02-13T19:35:07.087236566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.087309 containerd[1579]: time="2025-02-13T19:35:07.087287291Z" level=info msg="RemovePodSandbox \"ba7c6bd89f73615f2b14a79f87085415dfd94081f4d03f9be90d62d273499c9f\" returns successfully"
Feb 13 19:35:07.087570 containerd[1579]: time="2025-02-13T19:35:07.087536939Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\""
Feb 13 19:35:07.087666 containerd[1579]: time="2025-02-13T19:35:07.087640324Z" level=info msg="TearDown network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" successfully"
Feb 13 19:35:07.087666 containerd[1579]: time="2025-02-13T19:35:07.087658117Z" level=info msg="StopPodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" returns successfully"
Feb 13 19:35:07.087912 containerd[1579]: time="2025-02-13T19:35:07.087880935Z" level=info msg="RemovePodSandbox for \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\""
Feb 13 19:35:07.087912 containerd[1579]: time="2025-02-13T19:35:07.087907916Z" level=info msg="Forcibly stopping sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\""
Feb 13 19:35:07.088065 containerd[1579]: time="2025-02-13T19:35:07.088020958Z" level=info msg="TearDown network for sandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" successfully"
Feb 13 19:35:07.092088 containerd[1579]: time="2025-02-13T19:35:07.092058103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.092218 containerd[1579]: time="2025-02-13T19:35:07.092182016Z" level=info msg="RemovePodSandbox \"9e210eda25420527b41e49a224d59079afb6151226282c897cb9977259f71d12\" returns successfully"
Feb 13 19:35:07.092472 containerd[1579]: time="2025-02-13T19:35:07.092442354Z" level=info msg="StopPodSandbox for \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\""
Feb 13 19:35:07.092556 containerd[1579]: time="2025-02-13T19:35:07.092528535Z" level=info msg="TearDown network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" successfully"
Feb 13 19:35:07.092556 containerd[1579]: time="2025-02-13T19:35:07.092549515Z" level=info msg="StopPodSandbox for \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" returns successfully"
Feb 13 19:35:07.092769 containerd[1579]: time="2025-02-13T19:35:07.092749941Z" level=info msg="RemovePodSandbox for \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\""
Feb 13 19:35:07.092810 containerd[1579]: time="2025-02-13T19:35:07.092771331Z" level=info msg="Forcibly stopping sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\""
Feb 13 19:35:07.092909 containerd[1579]: time="2025-02-13T19:35:07.092855149Z" level=info msg="TearDown network for sandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" successfully"
Feb 13 19:35:07.096858 containerd[1579]: time="2025-02-13T19:35:07.096818255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.096935 containerd[1579]: time="2025-02-13T19:35:07.096874540Z" level=info msg="RemovePodSandbox \"b0358aca3f6969df2cf2d568824ef5e2098453b942f77770e2cd2abb2c7be7ec\" returns successfully"
Feb 13 19:35:07.097310 containerd[1579]: time="2025-02-13T19:35:07.097153443Z" level=info msg="StopPodSandbox for \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\""
Feb 13 19:35:07.097310 containerd[1579]: time="2025-02-13T19:35:07.097246469Z" level=info msg="TearDown network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\" successfully"
Feb 13 19:35:07.097310 containerd[1579]: time="2025-02-13T19:35:07.097255816Z" level=info msg="StopPodSandbox for \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\" returns successfully"
Feb 13 19:35:07.097513 containerd[1579]: time="2025-02-13T19:35:07.097480868Z" level=info msg="RemovePodSandbox for \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\""
Feb 13 19:35:07.097513 containerd[1579]: time="2025-02-13T19:35:07.097512247Z" level=info msg="Forcibly stopping sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\""
Feb 13 19:35:07.097684 containerd[1579]: time="2025-02-13T19:35:07.097597296Z" level=info msg="TearDown network for sandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\" successfully"
Feb 13 19:35:07.102159 containerd[1579]: time="2025-02-13T19:35:07.102087952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.102159 containerd[1579]: time="2025-02-13T19:35:07.102155599Z" level=info msg="RemovePodSandbox \"38b556f7a38a885665ae41b2ebc9903edf1b499410a206a896dc69a2f34bce41\" returns successfully"
Feb 13 19:35:07.102623 containerd[1579]: time="2025-02-13T19:35:07.102595725Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\""
Feb 13 19:35:07.102753 containerd[1579]: time="2025-02-13T19:35:07.102696344Z" level=info msg="TearDown network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" successfully"
Feb 13 19:35:07.102753 containerd[1579]: time="2025-02-13T19:35:07.102751447Z" level=info msg="StopPodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" returns successfully"
Feb 13 19:35:07.103030 containerd[1579]: time="2025-02-13T19:35:07.103004422Z" level=info msg="RemovePodSandbox for \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\""
Feb 13 19:35:07.103030 containerd[1579]: time="2025-02-13T19:35:07.103024990Z" level=info msg="Forcibly stopping sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\""
Feb 13 19:35:07.103194 containerd[1579]: time="2025-02-13T19:35:07.103087488Z" level=info msg="TearDown network for sandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" successfully"
Feb 13 19:35:07.107884 containerd[1579]: time="2025-02-13T19:35:07.107688151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.107884 containerd[1579]: time="2025-02-13T19:35:07.107732744Z" level=info msg="RemovePodSandbox \"e2a32d9ca117aaa4f498c157b4c387a7edb47dd0f28d754a15d310be476b625e\" returns successfully"
Feb 13 19:35:07.108074 containerd[1579]: time="2025-02-13T19:35:07.108039520Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\""
Feb 13 19:35:07.108172 containerd[1579]: time="2025-02-13T19:35:07.108153163Z" level=info msg="TearDown network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" successfully"
Feb 13 19:35:07.108239 containerd[1579]: time="2025-02-13T19:35:07.108176106Z" level=info msg="StopPodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" returns successfully"
Feb 13 19:35:07.108416 containerd[1579]: time="2025-02-13T19:35:07.108387222Z" level=info msg="RemovePodSandbox for \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\""
Feb 13 19:35:07.108416 containerd[1579]: time="2025-02-13T19:35:07.108408112Z" level=info msg="Forcibly stopping sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\""
Feb 13 19:35:07.108510 containerd[1579]: time="2025-02-13T19:35:07.108468084Z" level=info msg="TearDown network for sandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" successfully"
Feb 13 19:35:07.112331 containerd[1579]: time="2025-02-13T19:35:07.112295596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.114651 containerd[1579]: time="2025-02-13T19:35:07.112641735Z" level=info msg="RemovePodSandbox \"a7f364ea1dbd95846558542a3da76f59c11842faf5e30a7f23fcbc8a20b50583\" returns successfully"
Feb 13 19:35:07.115261 containerd[1579]: time="2025-02-13T19:35:07.115236554Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\""
Feb 13 19:35:07.115457 containerd[1579]: time="2025-02-13T19:35:07.115439795Z" level=info msg="TearDown network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" successfully"
Feb 13 19:35:07.115533 containerd[1579]: time="2025-02-13T19:35:07.115512521Z" level=info msg="StopPodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" returns successfully"
Feb 13 19:35:07.115975 containerd[1579]: time="2025-02-13T19:35:07.115951215Z" level=info msg="RemovePodSandbox for \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\""
Feb 13 19:35:07.116020 containerd[1579]: time="2025-02-13T19:35:07.115978256Z" level=info msg="Forcibly stopping sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\""
Feb 13 19:35:07.116210 containerd[1579]: time="2025-02-13T19:35:07.116042296Z" level=info msg="TearDown network for sandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" successfully"
Feb 13 19:35:07.119908 containerd[1579]: time="2025-02-13T19:35:07.119860790Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.119908 containerd[1579]: time="2025-02-13T19:35:07.119904512Z" level=info msg="RemovePodSandbox \"552971ef6392d6fa08e7fc079c9ef3b33ecd745525c9990a91c0837544cd15e8\" returns successfully"
Feb 13 19:35:07.120348 containerd[1579]: time="2025-02-13T19:35:07.120181753Z" level=info msg="StopPodSandbox for \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\""
Feb 13 19:35:07.120348 containerd[1579]: time="2025-02-13T19:35:07.120266232Z" level=info msg="TearDown network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" successfully"
Feb 13 19:35:07.120348 containerd[1579]: time="2025-02-13T19:35:07.120301177Z" level=info msg="StopPodSandbox for \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" returns successfully"
Feb 13 19:35:07.120563 containerd[1579]: time="2025-02-13T19:35:07.120541518Z" level=info msg="RemovePodSandbox for \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\""
Feb 13 19:35:07.120605 containerd[1579]: time="2025-02-13T19:35:07.120567236Z" level=info msg="Forcibly stopping sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\""
Feb 13 19:35:07.120671 containerd[1579]: time="2025-02-13T19:35:07.120641004Z" level=info msg="TearDown network for sandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" successfully"
Feb 13 19:35:07.124456 containerd[1579]: time="2025-02-13T19:35:07.124427500Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.124494 containerd[1579]: time="2025-02-13T19:35:07.124461584Z" level=info msg="RemovePodSandbox \"04d043ec31de05b4f17d45f3121e592c5386874a6d6d74c88c2109d4cd0b875d\" returns successfully"
Feb 13 19:35:07.124852 containerd[1579]: time="2025-02-13T19:35:07.124702185Z" level=info msg="StopPodSandbox for \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\""
Feb 13 19:35:07.124852 containerd[1579]: time="2025-02-13T19:35:07.124772567Z" level=info msg="TearDown network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\" successfully"
Feb 13 19:35:07.124852 containerd[1579]: time="2025-02-13T19:35:07.124781654Z" level=info msg="StopPodSandbox for \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\" returns successfully"
Feb 13 19:35:07.125762 containerd[1579]: time="2025-02-13T19:35:07.125056319Z" level=info msg="RemovePodSandbox for \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\""
Feb 13 19:35:07.125762 containerd[1579]: time="2025-02-13T19:35:07.125076427Z" level=info msg="Forcibly stopping sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\""
Feb 13 19:35:07.125762 containerd[1579]: time="2025-02-13T19:35:07.125148181Z" level=info msg="TearDown network for sandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\" successfully"
Feb 13 19:35:07.128659 containerd[1579]: time="2025-02-13T19:35:07.128624304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.128700 containerd[1579]: time="2025-02-13T19:35:07.128666864Z" level=info msg="RemovePodSandbox \"da78b8981250001048c10d2c12779c43104a36e587514ff35a971f2bb1bbea7e\" returns successfully"
Feb 13 19:35:07.128943 containerd[1579]: time="2025-02-13T19:35:07.128915892Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\""
Feb 13 19:35:07.129022 containerd[1579]: time="2025-02-13T19:35:07.128994980Z" level=info msg="TearDown network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" successfully"
Feb 13 19:35:07.129022 containerd[1579]: time="2025-02-13T19:35:07.129014607Z" level=info msg="StopPodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" returns successfully"
Feb 13 19:35:07.129219 containerd[1579]: time="2025-02-13T19:35:07.129183093Z" level=info msg="RemovePodSandbox for \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\""
Feb 13 19:35:07.129219 containerd[1579]: time="2025-02-13T19:35:07.129212237Z" level=info msg="Forcibly stopping sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\""
Feb 13 19:35:07.129321 containerd[1579]: time="2025-02-13T19:35:07.129292227Z" level=info msg="TearDown network for sandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" successfully"
Feb 13 19:35:07.132813 containerd[1579]: time="2025-02-13T19:35:07.132764793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.132813 containerd[1579]: time="2025-02-13T19:35:07.132794860Z" level=info msg="RemovePodSandbox \"58af316c8e3e834d9f73dd9b20a52a451dbb386b46455648d8d395cf6335c6c1\" returns successfully"
Feb 13 19:35:07.133055 containerd[1579]: time="2025-02-13T19:35:07.133032776Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\""
Feb 13 19:35:07.133150 containerd[1579]: time="2025-02-13T19:35:07.133120380Z" level=info msg="TearDown network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" successfully"
Feb 13 19:35:07.133150 containerd[1579]: time="2025-02-13T19:35:07.133139897Z" level=info msg="StopPodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" returns successfully"
Feb 13 19:35:07.133392 containerd[1579]: time="2025-02-13T19:35:07.133361493Z" level=info msg="RemovePodSandbox for \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\""
Feb 13 19:35:07.133392 containerd[1579]: time="2025-02-13T19:35:07.133388494Z" level=info msg="Forcibly stopping sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\""
Feb 13 19:35:07.133500 containerd[1579]: time="2025-02-13T19:35:07.133466240Z" level=info msg="TearDown network for sandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" successfully"
Feb 13 19:35:07.137190 containerd[1579]: time="2025-02-13T19:35:07.137165190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.137246 containerd[1579]: time="2025-02-13T19:35:07.137197591Z" level=info msg="RemovePodSandbox \"2ab0d37565fb2139c47178f36066cae0e9a1f8eb836274f0d8d2015a3d0e8210\" returns successfully"
Feb 13 19:35:07.137436 containerd[1579]: time="2025-02-13T19:35:07.137408927Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\""
Feb 13 19:35:07.137512 containerd[1579]: time="2025-02-13T19:35:07.137498625Z" level=info msg="TearDown network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" successfully"
Feb 13 19:35:07.137542 containerd[1579]: time="2025-02-13T19:35:07.137512642Z" level=info msg="StopPodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" returns successfully"
Feb 13 19:35:07.137777 containerd[1579]: time="2025-02-13T19:35:07.137757672Z" level=info msg="RemovePodSandbox for \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\""
Feb 13 19:35:07.137836 containerd[1579]: time="2025-02-13T19:35:07.137777669Z" level=info msg="Forcibly stopping sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\""
Feb 13 19:35:07.137877 containerd[1579]: time="2025-02-13T19:35:07.137851057Z" level=info msg="TearDown network for sandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" successfully"
Feb 13 19:35:07.141177 containerd[1579]: time="2025-02-13T19:35:07.141157801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.141260 containerd[1579]: time="2025-02-13T19:35:07.141185974Z" level=info msg="RemovePodSandbox \"a097b1a5c049de3a385b640f563bb7c8a8ee418ddf816ef27c81c1875066d236\" returns successfully"
Feb 13 19:35:07.141473 containerd[1579]: time="2025-02-13T19:35:07.141452184Z" level=info msg="StopPodSandbox for \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\""
Feb 13 19:35:07.141571 containerd[1579]: time="2025-02-13T19:35:07.141549126Z" level=info msg="TearDown network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" successfully"
Feb 13 19:35:07.141571 containerd[1579]: time="2025-02-13T19:35:07.141566368Z" level=info msg="StopPodSandbox for \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" returns successfully"
Feb 13 19:35:07.141864 containerd[1579]: time="2025-02-13T19:35:07.141845211Z" level=info msg="RemovePodSandbox for \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\""
Feb 13 19:35:07.141864 containerd[1579]: time="2025-02-13T19:35:07.141864708Z" level=info msg="Forcibly stopping sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\""
Feb 13 19:35:07.141964 containerd[1579]: time="2025-02-13T19:35:07.141938947Z" level=info msg="TearDown network for sandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" successfully"
Feb 13 19:35:07.145551 containerd[1579]: time="2025-02-13T19:35:07.145528352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.145639 containerd[1579]: time="2025-02-13T19:35:07.145562977Z" level=info msg="RemovePodSandbox \"45e84c6b7975d9ff0119f01f2e030422a88fc4058d1483fa8fe4362faf4f026e\" returns successfully"
Feb 13 19:35:07.145874 containerd[1579]: time="2025-02-13T19:35:07.145856939Z" level=info msg="StopPodSandbox for \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\""
Feb 13 19:35:07.145976 containerd[1579]: time="2025-02-13T19:35:07.145942459Z" level=info msg="TearDown network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\" successfully"
Feb 13 19:35:07.145976 containerd[1579]: time="2025-02-13T19:35:07.145974039Z" level=info msg="StopPodSandbox for \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\" returns successfully"
Feb 13 19:35:07.146186 containerd[1579]: time="2025-02-13T19:35:07.146155198Z" level=info msg="RemovePodSandbox for \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\""
Feb 13 19:35:07.146186 containerd[1579]: time="2025-02-13T19:35:07.146180396Z" level=info msg="Forcibly stopping sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\""
Feb 13 19:35:07.146304 containerd[1579]: time="2025-02-13T19:35:07.146270114Z" level=info msg="TearDown network for sandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\" successfully"
Feb 13 19:35:07.150083 containerd[1579]: time="2025-02-13T19:35:07.150058572Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.150147 containerd[1579]: time="2025-02-13T19:35:07.150098798Z" level=info msg="RemovePodSandbox \"c440e7c5845593262c6482f40b757ca9409b6ca4de804f3243906a2ea6d4d117\" returns successfully"
Feb 13 19:35:07.150420 containerd[1579]: time="2025-02-13T19:35:07.150384915Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\""
Feb 13 19:35:07.150499 containerd[1579]: time="2025-02-13T19:35:07.150480624Z" level=info msg="TearDown network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" successfully"
Feb 13 19:35:07.150532 containerd[1579]: time="2025-02-13T19:35:07.150496564Z" level=info msg="StopPodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" returns successfully"
Feb 13 19:35:07.150847 containerd[1579]: time="2025-02-13T19:35:07.150817576Z" level=info msg="RemovePodSandbox for \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\""
Feb 13 19:35:07.150847 containerd[1579]: time="2025-02-13T19:35:07.150846541Z" level=info msg="Forcibly stopping sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\""
Feb 13 19:35:07.150956 containerd[1579]: time="2025-02-13T19:35:07.150925980Z" level=info msg="TearDown network for sandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" successfully"
Feb 13 19:35:07.154693 containerd[1579]: time="2025-02-13T19:35:07.154661139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.154742 containerd[1579]: time="2025-02-13T19:35:07.154707746Z" level=info msg="RemovePodSandbox \"fba50ddaf9481236d26c38cde201e1f9c490485db2a98f5a057cc328eadecf7f\" returns successfully"
Feb 13 19:35:07.154965 containerd[1579]: time="2025-02-13T19:35:07.154938439Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\""
Feb 13 19:35:07.155030 containerd[1579]: time="2025-02-13T19:35:07.155018890Z" level=info msg="TearDown network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" successfully"
Feb 13 19:35:07.155030 containerd[1579]: time="2025-02-13T19:35:07.155027917Z" level=info msg="StopPodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" returns successfully"
Feb 13 19:35:07.155316 containerd[1579]: time="2025-02-13T19:35:07.155287433Z" level=info msg="RemovePodSandbox for \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\""
Feb 13 19:35:07.155363 containerd[1579]: time="2025-02-13T19:35:07.155319113Z" level=info msg="Forcibly stopping sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\""
Feb 13 19:35:07.155422 containerd[1579]: time="2025-02-13T19:35:07.155391780Z" level=info msg="TearDown network for sandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" successfully"
Feb 13 19:35:07.159040 containerd[1579]: time="2025-02-13T19:35:07.159013204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.159099 containerd[1579]: time="2025-02-13T19:35:07.159053060Z" level=info msg="RemovePodSandbox \"8e97fc671a4b0559997a21712e0d73cdcaaab1bf27428784d430bf092e7489f2\" returns successfully"
Feb 13 19:35:07.159337 containerd[1579]: time="2025-02-13T19:35:07.159312847Z" level=info msg="StopPodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\""
Feb 13 19:35:07.159447 containerd[1579]: time="2025-02-13T19:35:07.159416862Z" level=info msg="TearDown network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" successfully"
Feb 13 19:35:07.159447 containerd[1579]: time="2025-02-13T19:35:07.159435667Z" level=info msg="StopPodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" returns successfully"
Feb 13 19:35:07.159696 containerd[1579]: time="2025-02-13T19:35:07.159663925Z" level=info msg="RemovePodSandbox for \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\""
Feb 13 19:35:07.159696 containerd[1579]: time="2025-02-13T19:35:07.159692829Z" level=info msg="Forcibly stopping sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\""
Feb 13 19:35:07.159825 containerd[1579]: time="2025-02-13T19:35:07.159769755Z" level=info msg="TearDown network for sandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" successfully"
Feb 13 19:35:07.163398 containerd[1579]: time="2025-02-13T19:35:07.163361724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.163441 containerd[1579]: time="2025-02-13T19:35:07.163402400Z" level=info msg="RemovePodSandbox \"0097bb64aa6dc129c2e1c2a39c28f1148f3ed457eadbe1ba9d6328874f88af20\" returns successfully"
Feb 13 19:35:07.165932 containerd[1579]: time="2025-02-13T19:35:07.165904595Z" level=info msg="StopPodSandbox for \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\""
Feb 13 19:35:07.166058 containerd[1579]: time="2025-02-13T19:35:07.165997920Z" level=info msg="TearDown network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" successfully"
Feb 13 19:35:07.166058 containerd[1579]: time="2025-02-13T19:35:07.166011907Z" level=info msg="StopPodSandbox for \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" returns successfully"
Feb 13 19:35:07.166429 containerd[1579]: time="2025-02-13T19:35:07.166303944Z" level=info msg="RemovePodSandbox for \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\""
Feb 13 19:35:07.166429 containerd[1579]: time="2025-02-13T19:35:07.166323371Z" level=info msg="Forcibly stopping sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\""
Feb 13 19:35:07.166429 containerd[1579]: time="2025-02-13T19:35:07.166408912Z" level=info msg="TearDown network for sandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" successfully"
Feb 13 19:35:07.170464 containerd[1579]: time="2025-02-13T19:35:07.170430538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.170510 containerd[1579]: time="2025-02-13T19:35:07.170475642Z" level=info msg="RemovePodSandbox \"903c8b8c534d80e5be81ae8b1d26cde99c90b88868c86bcd1c47872bd3894cc7\" returns successfully"
Feb 13 19:35:07.170722 containerd[1579]: time="2025-02-13T19:35:07.170698270Z" level=info msg="StopPodSandbox for \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\""
Feb 13 19:35:07.170841 containerd[1579]: time="2025-02-13T19:35:07.170796444Z" level=info msg="TearDown network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\" successfully"
Feb 13 19:35:07.170885 containerd[1579]: time="2025-02-13T19:35:07.170837842Z" level=info msg="StopPodSandbox for \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\" returns successfully"
Feb 13 19:35:07.171101 containerd[1579]: time="2025-02-13T19:35:07.171072813Z" level=info msg="RemovePodSandbox for \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\""
Feb 13 19:35:07.171101 containerd[1579]: time="2025-02-13T19:35:07.171099533Z" level=info msg="Forcibly stopping sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\""
Feb 13 19:35:07.171229 containerd[1579]: time="2025-02-13T19:35:07.171176066Z" level=info msg="TearDown network for sandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\" successfully"
Feb 13 19:35:07.175008 containerd[1579]: time="2025-02-13T19:35:07.174971317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.175069 containerd[1579]: time="2025-02-13T19:35:07.175013436Z" level=info msg="RemovePodSandbox \"0d626de3fbb65fd86b54cbe808422423df3a6c1b2ba9849398b5db1c4f79b837\" returns successfully"
Feb 13 19:35:07.175350 containerd[1579]: time="2025-02-13T19:35:07.175328608Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\""
Feb 13 19:35:07.175420 containerd[1579]: time="2025-02-13T19:35:07.175404140Z" level=info msg="TearDown network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" successfully"
Feb 13 19:35:07.175420 containerd[1579]: time="2025-02-13T19:35:07.175417184Z" level=info msg="StopPodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" returns successfully"
Feb 13 19:35:07.175733 containerd[1579]: time="2025-02-13T19:35:07.175699824Z" level=info msg="RemovePodSandbox for \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\""
Feb 13 19:35:07.175733 containerd[1579]: time="2025-02-13T19:35:07.175730662Z" level=info msg="Forcibly stopping sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\""
Feb 13 19:35:07.175883 containerd[1579]: time="2025-02-13T19:35:07.175822294Z" level=info msg="TearDown network for sandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" successfully"
Feb 13 19:35:07.179372 containerd[1579]: time="2025-02-13T19:35:07.179335436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.179432 containerd[1579]: time="2025-02-13T19:35:07.179391421Z" level=info msg="RemovePodSandbox \"b0817781e467e0619a6a1d5cf3a33254216b324226010afe120998027725c6a4\" returns successfully"
Feb 13 19:35:07.179872 containerd[1579]: time="2025-02-13T19:35:07.179670245Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\""
Feb 13 19:35:07.179872 containerd[1579]: time="2025-02-13T19:35:07.179776063Z" level=info msg="TearDown network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" successfully"
Feb 13 19:35:07.179872 containerd[1579]: time="2025-02-13T19:35:07.179790581Z" level=info msg="StopPodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" returns successfully"
Feb 13 19:35:07.180070 containerd[1579]: time="2025-02-13T19:35:07.180038866Z" level=info msg="RemovePodSandbox for \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\""
Feb 13 19:35:07.180070 containerd[1579]: time="2025-02-13T19:35:07.180064344Z" level=info msg="Forcibly stopping sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\""
Feb 13 19:35:07.180190 containerd[1579]: time="2025-02-13T19:35:07.180145055Z" level=info msg="TearDown network for sandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" successfully"
Feb 13 19:35:07.183916 containerd[1579]: time="2025-02-13T19:35:07.183883269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.183973 containerd[1579]: time="2025-02-13T19:35:07.183920339Z" level=info msg="RemovePodSandbox \"e3937e9776e35e08c4681d46c97e0b6580b7c84e1d27ab69c649f02deb1c444b\" returns successfully"
Feb 13 19:35:07.184231 containerd[1579]: time="2025-02-13T19:35:07.184197570Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\""
Feb 13 19:35:07.184322 containerd[1579]: time="2025-02-13T19:35:07.184299050Z" level=info msg="TearDown network for sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" successfully"
Feb 13 19:35:07.184322 containerd[1579]: time="2025-02-13T19:35:07.184315962Z" level=info msg="StopPodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" returns successfully"
Feb 13 19:35:07.184599 containerd[1579]: time="2025-02-13T19:35:07.184573565Z" level=info msg="RemovePodSandbox for \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\""
Feb 13 19:35:07.184652 containerd[1579]: time="2025-02-13T19:35:07.184601156Z" level=info msg="Forcibly stopping sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\""
Feb 13 19:35:07.184716 containerd[1579]: time="2025-02-13T19:35:07.184679984Z" level=info msg="TearDown network for sandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" successfully"
Feb 13 19:35:07.188476 containerd[1579]: time="2025-02-13T19:35:07.188437384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.188545 containerd[1579]: time="2025-02-13T19:35:07.188478152Z" level=info msg="RemovePodSandbox \"f0d9ecfe1f242fd7c5eed42b3c189fb048b7331f0738fe2c03e3601376c66dbd\" returns successfully"
Feb 13 19:35:07.188717 containerd[1579]: time="2025-02-13T19:35:07.188687965Z" level=info msg="StopPodSandbox for \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\""
Feb 13 19:35:07.188790 containerd[1579]: time="2025-02-13T19:35:07.188773485Z" level=info msg="TearDown network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" successfully"
Feb 13 19:35:07.188790 containerd[1579]: time="2025-02-13T19:35:07.188786871Z" level=info msg="StopPodSandbox for \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" returns successfully"
Feb 13 19:35:07.189062 containerd[1579]: time="2025-02-13T19:35:07.189019296Z" level=info msg="RemovePodSandbox for \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\""
Feb 13 19:35:07.189062 containerd[1579]: time="2025-02-13T19:35:07.189051176Z" level=info msg="Forcibly stopping sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\""
Feb 13 19:35:07.189140 containerd[1579]: time="2025-02-13T19:35:07.189114635Z" level=info msg="TearDown network for sandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" successfully"
Feb 13 19:35:07.192658 containerd[1579]: time="2025-02-13T19:35:07.192612699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.192658 containerd[1579]: time="2025-02-13T19:35:07.192655028Z" level=info msg="RemovePodSandbox \"d202a1c1882c3309214fb2fc51f17a3e5487dff14bc27eec983250c6bb652f09\" returns successfully"
Feb 13 19:35:07.192918 containerd[1579]: time="2025-02-13T19:35:07.192884780Z" level=info msg="StopPodSandbox for \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\""
Feb 13 19:35:07.193011 containerd[1579]: time="2025-02-13T19:35:07.192989446Z" level=info msg="TearDown network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\" successfully"
Feb 13 19:35:07.193011 containerd[1579]: time="2025-02-13T19:35:07.193007480Z" level=info msg="StopPodSandbox for \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\" returns successfully"
Feb 13 19:35:07.193302 containerd[1579]: time="2025-02-13T19:35:07.193271846Z" level=info msg="RemovePodSandbox for \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\""
Feb 13 19:35:07.193302 containerd[1579]: time="2025-02-13T19:35:07.193297895Z" level=info msg="Forcibly stopping sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\""
Feb 13 19:35:07.193407 containerd[1579]: time="2025-02-13T19:35:07.193368657Z" level=info msg="TearDown network for sandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\" successfully"
Feb 13 19:35:07.196910 containerd[1579]: time="2025-02-13T19:35:07.196883533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:07.196984 containerd[1579]: time="2025-02-13T19:35:07.196921584Z" level=info msg="RemovePodSandbox \"4c3ac0adf79604058030beed00f728ceca616971f21c6a12bd37c35c283d239c\" returns successfully"
Feb 13 19:35:10.479010 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:39572.service - OpenSSH per-connection server daemon (10.0.0.1:39572).
Feb 13 19:35:10.518240 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 39572 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI
Feb 13 19:35:10.519702 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:35:10.523665 systemd-logind[1555]: New session 20 of user core.
Feb 13 19:35:10.533165 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:35:10.640342 sshd[6044]: Connection closed by 10.0.0.1 port 39572
Feb 13 19:35:10.640684 sshd-session[6041]: pam_unix(sshd:session): session closed for user core
Feb 13 19:35:10.644618 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:39572.service: Deactivated successfully.
Feb 13 19:35:10.647032 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:35:10.647067 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:35:10.648233 systemd-logind[1555]: Removed session 20.
Feb 13 19:35:15.657011 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:39582.service - OpenSSH per-connection server daemon (10.0.0.1:39582).
Feb 13 19:35:15.698594 sshd[6082]: Accepted publickey for core from 10.0.0.1 port 39582 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI
Feb 13 19:35:15.700373 sshd-session[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:35:15.704254 systemd-logind[1555]: New session 21 of user core.
Feb 13 19:35:15.717054 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:35:15.833584 sshd[6085]: Connection closed by 10.0.0.1 port 39582
Feb 13 19:35:15.833949 sshd-session[6082]: pam_unix(sshd:session): session closed for user core
Feb 13 19:35:15.838322 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:39582.service: Deactivated successfully.
Feb 13 19:35:15.840758 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:35:15.840836 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:35:15.841863 systemd-logind[1555]: Removed session 21.
Feb 13 19:35:20.056974 kubelet[2796]: E0213 19:35:20.056939 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:35:20.849016 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:54930.service - OpenSSH per-connection server daemon (10.0.0.1:54930).
Feb 13 19:35:20.888641 sshd[6100]: Accepted publickey for core from 10.0.0.1 port 54930 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI
Feb 13 19:35:20.890355 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:35:20.894727 systemd-logind[1555]: New session 22 of user core.
Feb 13 19:35:20.902214 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:35:21.019309 sshd[6103]: Connection closed by 10.0.0.1 port 54930
Feb 13 19:35:21.019647 sshd-session[6100]: pam_unix(sshd:session): session closed for user core
Feb 13 19:35:21.023237 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:54930.service: Deactivated successfully.
Feb 13 19:35:21.025896 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:35:21.026594 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:35:21.027448 systemd-logind[1555]: Removed session 22.
Feb 13 19:35:21.056472 kubelet[2796]: E0213 19:35:21.056400 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:35:23.796302 kubelet[2796]: I0213 19:35:23.796250 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:35:26.029153 systemd[1]: Started sshd@22-10.0.0.18:22-10.0.0.1:54944.service - OpenSSH per-connection server daemon (10.0.0.1:54944).
Feb 13 19:35:26.056431 kubelet[2796]: E0213 19:35:26.056404 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:35:26.067904 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 54944 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI
Feb 13 19:35:26.069619 sshd-session[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:35:26.073815 systemd-logind[1555]: New session 23 of user core.
Feb 13 19:35:26.083085 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:35:26.185477 sshd[6122]: Connection closed by 10.0.0.1 port 54944
Feb 13 19:35:26.185841 sshd-session[6119]: pam_unix(sshd:session): session closed for user core
Feb 13 19:35:26.189820 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:54944.service: Deactivated successfully.
Feb 13 19:35:26.192233 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:35:26.192351 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:35:26.193444 systemd-logind[1555]: Removed session 23.