Feb 13 15:33:08.874912 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:33:08.874933 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:33:08.874945 kernel: BIOS-provided physical RAM map:
Feb 13 15:33:08.874951 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:33:08.874957 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:33:08.874963 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:33:08.874970 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 15:33:08.874976 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 15:33:08.874982 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 15:33:08.874990 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 15:33:08.874996 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:33:08.875003 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:33:08.875009 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:33:08.875015 kernel: NX (Execute Disable) protection: active
Feb 13 15:33:08.875022 kernel: APIC: Static calls initialized
Feb 13 15:33:08.875032 kernel: SMBIOS 2.8 present.
Feb 13 15:33:08.875039 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 15:33:08.875045 kernel: Hypervisor detected: KVM
Feb 13 15:33:08.875052 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:33:08.875058 kernel: kvm-clock: using sched offset of 2216058662 cycles
Feb 13 15:33:08.875065 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:33:08.875072 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 15:33:08.875079 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:33:08.875086 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:33:08.875093 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 15:33:08.875102 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:33:08.875109 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:33:08.875116 kernel: Using GB pages for direct mapping
Feb 13 15:33:08.875123 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:33:08.875129 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 15:33:08.875136 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:33:08.875143 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:33:08.875150 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:33:08.875159 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 15:33:08.875166 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:33:08.875172 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:33:08.875179 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:33:08.875186 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:33:08.875193 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 15:33:08.875200 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 15:33:08.875210 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 15:33:08.875219 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 15:33:08.875226 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 15:33:08.875233 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 15:33:08.875240 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 15:33:08.875247 kernel: No NUMA configuration found
Feb 13 15:33:08.875254 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 15:33:08.875261 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 15:33:08.875270 kernel: Zone ranges:
Feb 13 15:33:08.875277 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:33:08.875284 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 15:33:08.875291 kernel: Normal empty
Feb 13 15:33:08.875298 kernel: Movable zone start for each node
Feb 13 15:33:08.875305 kernel: Early memory node ranges
Feb 13 15:33:08.875312 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:33:08.875319 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 15:33:08.875326 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 15:33:08.875335 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:33:08.875342 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:33:08.875349 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 15:33:08.875357 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:33:08.875364 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:33:08.875384 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:33:08.875391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:33:08.875398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:33:08.875405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:33:08.875415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:33:08.875422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:33:08.875429 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:33:08.875436 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:33:08.875443 kernel: TSC deadline timer available
Feb 13 15:33:08.875450 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:33:08.875457 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:33:08.875464 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:33:08.875471 kernel: kvm-guest: setup PV sched yield
Feb 13 15:33:08.875478 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 15:33:08.875487 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:33:08.875495 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:33:08.875502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:33:08.875509 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:33:08.875516 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:33:08.875523 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:33:08.875529 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:33:08.875536 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:33:08.875545 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:33:08.875555 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:33:08.875562 kernel: random: crng init done
Feb 13 15:33:08.875569 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:33:08.875576 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:33:08.875583 kernel: Fallback order for Node 0: 0
Feb 13 15:33:08.875591 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 15:33:08.875598 kernel: Policy zone: DMA32
Feb 13 15:33:08.875605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:33:08.875614 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved)
Feb 13 15:33:08.875621 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:33:08.875629 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:33:08.875636 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:33:08.875643 kernel: Dynamic Preempt: voluntary
Feb 13 15:33:08.875650 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:33:08.875658 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:33:08.875665 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:33:08.875672 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:33:08.875682 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:33:08.875697 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:33:08.875704 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:33:08.875711 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:33:08.875718 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:33:08.875726 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:33:08.875733 kernel: Console: colour VGA+ 80x25
Feb 13 15:33:08.875740 kernel: printk: console [ttyS0] enabled
Feb 13 15:33:08.875747 kernel: ACPI: Core revision 20230628
Feb 13 15:33:08.875757 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:33:08.875764 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:33:08.875771 kernel: x2apic enabled
Feb 13 15:33:08.875778 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:33:08.875785 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:33:08.875793 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:33:08.875800 kernel: kvm-guest: setup PV IPIs
Feb 13 15:33:08.875816 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:33:08.875824 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:33:08.875831 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 15:33:08.875839 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:33:08.875846 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:33:08.875855 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:33:08.875863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:33:08.875870 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:33:08.875878 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:33:08.875885 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:33:08.875895 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:33:08.875902 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:33:08.875909 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:33:08.875917 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:33:08.875924 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:33:08.875932 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:33:08.875940 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:33:08.875947 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:33:08.875957 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:33:08.875964 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:33:08.875971 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:33:08.875979 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:33:08.875986 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:33:08.875993 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:33:08.876001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:33:08.876008 kernel: landlock: Up and running.
Feb 13 15:33:08.876015 kernel: SELinux: Initializing.
Feb 13 15:33:08.876025 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:33:08.876033 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:33:08.876042 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:33:08.876050 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:33:08.876059 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:33:08.876067 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:33:08.876075 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:33:08.876082 kernel: ... version: 0
Feb 13 15:33:08.876089 kernel: ... bit width: 48
Feb 13 15:33:08.876099 kernel: ... generic registers: 6
Feb 13 15:33:08.876107 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:33:08.876114 kernel: ... max period: 00007fffffffffff
Feb 13 15:33:08.876121 kernel: ... fixed-purpose events: 0
Feb 13 15:33:08.876128 kernel: ... event mask: 000000000000003f
Feb 13 15:33:08.876136 kernel: signal: max sigframe size: 1776
Feb 13 15:33:08.876143 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:33:08.876151 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:33:08.876158 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:33:08.876168 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:33:08.876175 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:33:08.876182 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:33:08.876190 kernel: smpboot: Max logical packages: 1
Feb 13 15:33:08.876197 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 15:33:08.876204 kernel: devtmpfs: initialized
Feb 13 15:33:08.876211 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:33:08.876219 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:33:08.876226 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:33:08.876236 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:33:08.876243 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:33:08.876251 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:33:08.876259 kernel: audit: type=2000 audit(1739460789.182:1): state=initialized audit_enabled=0 res=1
Feb 13 15:33:08.876266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:33:08.876273 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:33:08.876281 kernel: cpuidle: using governor menu
Feb 13 15:33:08.876288 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:33:08.876295 kernel: dca service started, version 1.12.1
Feb 13 15:33:08.876305 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 15:33:08.876312 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 15:33:08.876319 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:33:08.876327 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:33:08.876335 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:33:08.876342 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:33:08.876349 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:33:08.876357 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:33:08.876364 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:33:08.876410 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:33:08.876418 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:33:08.876425 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:33:08.876433 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:33:08.876440 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:33:08.876447 kernel: ACPI: Interpreter enabled
Feb 13 15:33:08.876455 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:33:08.876462 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:33:08.876470 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:33:08.876480 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:33:08.876487 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:33:08.876495 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:33:08.876679 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:33:08.876823 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:33:08.876948 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:33:08.876958 kernel: PCI host bridge to bus 0000:00
Feb 13 15:33:08.877089 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:33:08.877202 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:33:08.877314 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:33:08.877444 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 15:33:08.877558 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 15:33:08.877671 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 15:33:08.877804 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:33:08.877963 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:33:08.878107 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:33:08.878233 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 15:33:08.878357 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 15:33:08.878502 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 15:33:08.878627 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:33:08.878769 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:33:08.878901 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 15:33:08.879026 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 15:33:08.879155 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 15:33:08.879292 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:33:08.879436 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 15:33:08.879562 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 15:33:08.879698 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 15:33:08.879835 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:33:08.879960 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 15:33:08.880083 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 15:33:08.880206 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 15:33:08.880330 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 15:33:08.880478 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:33:08.880608 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:33:08.880751 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:33:08.880877 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 15:33:08.881000 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 15:33:08.881132 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:33:08.881256 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 15:33:08.881266 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:33:08.881278 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:33:08.881286 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:33:08.881293 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:33:08.881301 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:33:08.881308 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:33:08.881316 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:33:08.881323 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:33:08.881331 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:33:08.881338 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:33:08.881348 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:33:08.881355 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:33:08.881363 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:33:08.881387 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:33:08.881395 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:33:08.881402 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:33:08.881410 kernel: iommu: Default domain type: Translated
Feb 13 15:33:08.881417 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:33:08.881425 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:33:08.881435 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:33:08.881443 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:33:08.881450 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 15:33:08.881576 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:33:08.881708 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:33:08.881833 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:33:08.881843 kernel: vgaarb: loaded
Feb 13 15:33:08.881851 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:33:08.881862 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:33:08.881870 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:33:08.881878 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:33:08.881889 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:33:08.881899 kernel: pnp: PnP ACPI init
Feb 13 15:33:08.882048 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 15:33:08.882061 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:33:08.882070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:33:08.882082 kernel: NET: Registered PF_INET protocol family
Feb 13 15:33:08.882089 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:33:08.882097 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:33:08.882105 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:33:08.882113 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:33:08.882121 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:33:08.882128 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:33:08.882136 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:33:08.882144 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:33:08.882153 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:33:08.882161 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:33:08.882339 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:33:08.882545 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:33:08.882660 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:33:08.882782 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 15:33:08.882894 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 15:33:08.883007 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 15:33:08.883021 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:33:08.883029 kernel: Initialise system trusted keyrings
Feb 13 15:33:08.883036 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:33:08.883044 kernel: Key type asymmetric registered
Feb 13 15:33:08.883051 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:33:08.883059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:33:08.883067 kernel: io scheduler mq-deadline registered
Feb 13 15:33:08.883074 kernel: io scheduler kyber registered
Feb 13 15:33:08.883082 kernel: io scheduler bfq registered
Feb 13 15:33:08.883089 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:33:08.883100 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:33:08.883108 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:33:08.883115 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:33:08.883123 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:33:08.883131 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:33:08.883138 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:33:08.883146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:33:08.883154 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:33:08.883286 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:33:08.883300 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:33:08.883432 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:33:08.883561 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:33:08 UTC (1739460788)
Feb 13 15:33:08.883746 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 15:33:08.883757 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:33:08.883764 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:33:08.883772 kernel: Segment Routing with IPv6
Feb 13 15:33:08.883783 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:33:08.883791 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:33:08.883798 kernel: Key type dns_resolver registered
Feb 13 15:33:08.883805 kernel: IPI shorthand broadcast: enabled
Feb 13 15:33:08.883813 kernel: sched_clock: Marking stable (564002231, 116323851)->(766531029, -86204947)
Feb 13 15:33:08.883820 kernel: registered taskstats version 1
Feb 13 15:33:08.883828 kernel: Loading compiled-in X.509 certificates
Feb 13 15:33:08.883836 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:33:08.883843 kernel: Key type .fscrypt registered
Feb 13 15:33:08.883853 kernel: Key type fscrypt-provisioning registered
Feb 13 15:33:08.883861 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:33:08.883868 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:33:08.883876 kernel: ima: No architecture policies found
Feb 13 15:33:08.883883 kernel: clk: Disabling unused clocks
Feb 13 15:33:08.883891 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:33:08.883907 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:33:08.883915 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:33:08.883923 kernel: Run /init as init process
Feb 13 15:33:08.883939 kernel: with arguments:
Feb 13 15:33:08.883947 kernel: /init
Feb 13 15:33:08.883954 kernel: with environment:
Feb 13 15:33:08.883962 kernel: HOME=/
Feb 13 15:33:08.883969 kernel: TERM=linux
Feb 13 15:33:08.883977 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:33:08.883987 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:33:08.883996 systemd[1]: Detected virtualization kvm.
Feb 13 15:33:08.884007 systemd[1]: Detected architecture x86-64.
Feb 13 15:33:08.884015 systemd[1]: Running in initrd.
Feb 13 15:33:08.884023 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:33:08.884031 systemd[1]: Hostname set to .
Feb 13 15:33:08.884039 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:33:08.884047 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:33:08.884056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:33:08.884064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:33:08.884077 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:33:08.884099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:33:08.884110 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:33:08.884119 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:33:08.884128 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:33:08.884139 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:33:08.884147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:33:08.884155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:33:08.884163 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:33:08.884172 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:33:08.884180 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:33:08.884188 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:33:08.884197 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:33:08.884207 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:33:08.884215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:33:08.884224 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:33:08.884232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:33:08.884240 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:33:08.884249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:33:08.884257 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:33:08.884265 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:33:08.884274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:33:08.884284 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:33:08.884292 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:33:08.884300 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:33:08.884309 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:33:08.884317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:33:08.884325 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:33:08.884333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:33:08.884341 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:33:08.884353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:33:08.884393 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 15:33:08.884415 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:33:08.884424 systemd-journald[194]: Journal started Feb 13 15:33:08.884443 systemd-journald[194]: Runtime Journal (/run/log/journal/d276d11fe5814b639b504bb7809d5857) is 6.0M, max 48.4M, 42.3M free. Feb 13 15:33:08.883959 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:33:08.917485 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:33:08.917506 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 15:33:08.917518 kernel: Bridge firewalling registered Feb 13 15:33:08.917528 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:33:08.910359 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:33:08.928765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:33:08.930076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:33:08.941530 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:33:08.944584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:33:08.947068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:33:08.948410 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:33:08.965644 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:33:08.970632 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:33:08.976829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:33:08.978138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:33:08.986144 dracut-cmdline[226]: dracut-dracut-053 Feb 13 15:33:08.989271 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:33:08.987497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:33:09.021158 systemd-resolved[234]: Positive Trust Anchors: Feb 13 15:33:09.021174 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:33:09.021204 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:33:09.031697 systemd-resolved[234]: Defaulting to hostname 'linux'. Feb 13 15:33:09.033541 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:33:09.034032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:33:09.073404 kernel: SCSI subsystem initialized Feb 13 15:33:09.082397 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:33:09.093398 kernel: iscsi: registered transport (tcp) Feb 13 15:33:09.113697 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:33:09.113729 kernel: QLogic iSCSI HBA Driver Feb 13 15:33:09.161424 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:33:09.181551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:33:09.208010 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 15:33:09.208074 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:33:09.208102 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:33:09.248401 kernel: raid6: avx2x4 gen() 27437 MB/s Feb 13 15:33:09.265402 kernel: raid6: avx2x2 gen() 28180 MB/s Feb 13 15:33:09.282530 kernel: raid6: avx2x1 gen() 24367 MB/s Feb 13 15:33:09.282574 kernel: raid6: using algorithm avx2x2 gen() 28180 MB/s Feb 13 15:33:09.300589 kernel: raid6: .... xor() 18183 MB/s, rmw enabled Feb 13 15:33:09.300637 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:33:09.321421 kernel: xor: automatically using best checksumming function avx Feb 13 15:33:09.472403 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:33:09.484985 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:33:09.501585 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:33:09.514988 systemd-udevd[413]: Using default interface naming scheme 'v255'. Feb 13 15:33:09.520527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:33:09.528523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:33:09.541922 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Feb 13 15:33:09.572775 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:33:09.587473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:33:09.650047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:33:09.662486 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Feb 13 15:33:09.677439 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:33:09.711448 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:33:09.711603 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:33:09.711623 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:33:09.711634 kernel: GPT:9289727 != 19775487 Feb 13 15:33:09.711644 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:33:09.711655 kernel: GPT:9289727 != 19775487 Feb 13 15:33:09.711664 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:33:09.711682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:33:09.711692 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:33:09.711703 kernel: AES CTR mode by8 optimization enabled Feb 13 15:33:09.679380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:33:09.715442 kernel: libata version 3.00 loaded. Feb 13 15:33:09.683023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:33:09.684401 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:33:09.690505 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:33:09.710179 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:33:09.717736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:33:09.717848 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:33:09.720025 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:33:09.726582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:33:09.726762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:33:09.728151 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:33:09.736822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:33:09.737782 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:33:09.745410 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) Feb 13 15:33:09.750100 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (476) Feb 13 15:33:09.750131 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:33:09.764120 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:33:09.764139 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:33:09.764296 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:33:09.764459 kernel: scsi host0: ahci Feb 13 15:33:09.764613 kernel: scsi host1: ahci Feb 13 15:33:09.764768 kernel: scsi host2: ahci Feb 13 15:33:09.764912 kernel: scsi host3: ahci Feb 13 15:33:09.765057 kernel: scsi host4: ahci Feb 13 15:33:09.765211 kernel: scsi host5: ahci Feb 13 15:33:09.765361 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 15:33:09.765423 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 15:33:09.765434 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 15:33:09.765444 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 15:33:09.765454 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 15:33:09.765464 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 15:33:09.755270 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:33:09.771098 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Feb 13 15:33:09.808004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:33:09.819329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:33:09.825535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:33:09.828062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:33:09.843531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:33:09.846601 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:33:09.853349 disk-uuid[568]: Primary Header is updated. Feb 13 15:33:09.853349 disk-uuid[568]: Secondary Entries is updated. Feb 13 15:33:09.853349 disk-uuid[568]: Secondary Header is updated. Feb 13 15:33:09.856402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:33:09.875445 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:33:10.076028 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:33:10.076101 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:33:10.076112 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:33:10.077433 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:33:10.078407 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:33:10.078434 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:33:10.079700 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:33:10.079725 kernel: ata3.00: applying bridge limits Feb 13 15:33:10.080700 kernel: ata3.00: configured for UDMA/100 Feb 13 15:33:10.081398 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:33:10.122407 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:33:10.136290 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:33:10.136309 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:33:10.865403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:33:10.865788 disk-uuid[570]: The operation has completed successfully. Feb 13 15:33:10.893304 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:33:10.893485 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:33:10.925552 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:33:10.931551 sh[594]: Success Feb 13 15:33:10.945392 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:33:10.978852 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:33:10.996000 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:33:10.999307 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:33:11.013771 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:33:11.013813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:33:11.013825 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:33:11.015541 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:33:11.015554 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:33:11.020142 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:33:11.022456 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:33:11.035489 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:33:11.037986 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:33:11.046692 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:33:11.046729 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:33:11.046740 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:33:11.049400 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:33:11.058769 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:33:11.061183 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:33:11.071030 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:33:11.078513 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 15:33:11.130579 ignition[692]: Ignition 2.20.0 Feb 13 15:33:11.130593 ignition[692]: Stage: fetch-offline Feb 13 15:33:11.130627 ignition[692]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:33:11.130644 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:33:11.130732 ignition[692]: parsed url from cmdline: "" Feb 13 15:33:11.130736 ignition[692]: no config URL provided Feb 13 15:33:11.130741 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:33:11.130750 ignition[692]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:33:11.130776 ignition[692]: op(1): [started] loading QEMU firmware config module Feb 13 15:33:11.130781 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:33:11.139203 ignition[692]: op(1): [finished] loading QEMU firmware config module Feb 13 15:33:11.155873 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:33:11.172574 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:33:11.183530 ignition[692]: parsing config with SHA512: 343cdd6510593293eaaed29b25e7829ba12f6ea66c3e482259677519906e295667fc262e3ab266c1e387b41aec1220bad0d7b069099def8b53e62553dd0a3eb4 Feb 13 15:33:11.191001 unknown[692]: fetched base config from "system" Feb 13 15:33:11.191022 unknown[692]: fetched user config from "qemu" Feb 13 15:33:11.191608 ignition[692]: fetch-offline: fetch-offline passed Feb 13 15:33:11.191735 ignition[692]: Ignition finished successfully Feb 13 15:33:11.194666 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:33:11.201970 systemd-networkd[783]: lo: Link UP Feb 13 15:33:11.201981 systemd-networkd[783]: lo: Gained carrier Feb 13 15:33:11.203983 systemd-networkd[783]: Enumeration completed Feb 13 15:33:11.204104 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 15:33:11.204474 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:33:11.204479 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:33:11.205575 systemd-networkd[783]: eth0: Link UP Feb 13 15:33:11.205580 systemd-networkd[783]: eth0: Gained carrier Feb 13 15:33:11.205588 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:33:11.206688 systemd[1]: Reached target network.target - Network. Feb 13 15:33:11.208672 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:33:11.218552 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:33:11.225449 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:33:11.231891 ignition[786]: Ignition 2.20.0 Feb 13 15:33:11.231904 ignition[786]: Stage: kargs Feb 13 15:33:11.232084 ignition[786]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:33:11.232100 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:33:11.233029 ignition[786]: kargs: kargs passed Feb 13 15:33:11.233074 ignition[786]: Ignition finished successfully Feb 13 15:33:11.240010 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:33:11.255527 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:33:11.268427 ignition[795]: Ignition 2.20.0 Feb 13 15:33:11.268443 ignition[795]: Stage: disks Feb 13 15:33:11.268602 ignition[795]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:33:11.268614 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:33:11.271609 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Feb 13 15:33:11.269481 ignition[795]: disks: disks passed Feb 13 15:33:11.273229 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:33:11.269524 ignition[795]: Ignition finished successfully Feb 13 15:33:11.275141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:33:11.277000 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:33:11.279064 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:33:11.279671 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:33:11.286632 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:33:11.299283 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:33:11.445211 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:33:11.456501 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:33:11.539326 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:33:11.542019 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:33:11.540646 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:33:11.560444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:33:11.562166 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:33:11.562835 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:33:11.568433 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Feb 13 15:33:11.562872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Feb 13 15:33:11.574249 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:33:11.574266 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:33:11.574277 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:33:11.562892 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:33:11.577110 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:33:11.572499 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:33:11.575327 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:33:11.578461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:33:11.609777 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:33:11.613366 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:33:11.617014 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:33:11.621187 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:33:11.698830 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:33:11.707582 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:33:11.710473 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:33:11.715392 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:33:11.731477 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:33:11.733442 ignition[925]: INFO : Ignition 2.20.0 Feb 13 15:33:11.733442 ignition[925]: INFO : Stage: mount Feb 13 15:33:11.733442 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:33:11.733442 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:33:11.738116 ignition[925]: INFO : mount: mount passed Feb 13 15:33:11.738116 ignition[925]: INFO : Ignition finished successfully Feb 13 15:33:11.735893 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:33:11.747508 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:33:12.012950 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:33:12.022600 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:33:12.030945 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939) Feb 13 15:33:12.030967 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:33:12.030978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:33:12.032410 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:33:12.034388 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:33:12.035948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:33:12.059049 ignition[956]: INFO : Ignition 2.20.0 Feb 13 15:33:12.059049 ignition[956]: INFO : Stage: files Feb 13 15:33:12.060896 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:33:12.060896 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:33:12.060896 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:33:12.064401 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:33:12.064401 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:33:12.067030 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:33:12.067030 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:33:12.067030 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:33:12.066468 unknown[956]: wrote ssh authorized keys file for user: core Feb 13 15:33:12.071991 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:33:12.071991 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:33:12.115059 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:33:12.940529 systemd-networkd[783]: eth0: Gained IPv6LL Feb 13 15:33:12.959898 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:33:12.961929 ignition[956]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 15:33:13.319798 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:33:13.694968 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:33:13.694968 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:33:13.699062 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:33:13.701281 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:33:13.701281 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:33:13.701281 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:33:13.705585 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:33:13.707503 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:33:13.707503 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:33:13.710628 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:33:13.731898 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:33:13.737133 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:33:13.738727 ignition[956]: INFO : 
files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:33:13.738727 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:33:13.738727 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:33:13.738727 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:33:13.738727 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:33:13.738727 ignition[956]: INFO : files: files passed Feb 13 15:33:13.738727 ignition[956]: INFO : Ignition finished successfully Feb 13 15:33:13.749915 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:33:13.756529 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:33:13.757394 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:33:13.764681 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:33:13.764796 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:33:13.767840 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:33:13.770971 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:33:13.770971 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:33:13.774152 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:33:13.776708 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Feb 13 15:33:13.777117 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:33:13.789509 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:33:13.811710 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:33:13.811831 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:33:13.812360 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:33:13.815238 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:33:13.817346 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:33:13.820454 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:33:13.837761 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:33:13.839352 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:33:13.859438 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:33:13.859773 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:33:13.861960 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:33:13.864139 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:33:13.864243 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:33:13.867684 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:33:13.869752 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:33:13.870289 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:33:13.870638 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:33:13.870968 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:33:13.876672 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:33:13.878717 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:33:13.880807 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:33:13.882948 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:33:13.884856 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:33:13.886682 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:33:13.886785 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:33:13.889730 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:33:13.890267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:33:13.892977 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:33:13.895715 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:33:13.898219 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:33:13.898326 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:33:13.901074 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:33:13.901182 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:33:13.903261 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:33:13.905067 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:33:13.909429 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:33:13.909955 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:33:13.910259 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:33:13.913941 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:33:13.914030 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:33:13.915863 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:33:13.915949 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:33:13.917429 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:33:13.917538 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:33:13.919425 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:33:13.919524 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:33:13.935517 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:33:13.935785 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:33:13.935891 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:33:13.938430 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:33:13.939823 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:33:13.939974 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:33:13.941863 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:33:13.941969 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:33:13.948922 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:33:13.949029 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:33:13.963731 ignition[1011]: INFO : Ignition 2.20.0
Feb 13 15:33:13.963731 ignition[1011]: INFO : Stage: umount
Feb 13 15:33:13.966457 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:33:13.966457 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:33:13.966457 ignition[1011]: INFO : umount: umount passed
Feb 13 15:33:13.966457 ignition[1011]: INFO : Ignition finished successfully
Feb 13 15:33:13.965640 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:33:13.971657 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:33:13.972693 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:33:13.975348 systemd[1]: Stopped target network.target - Network.
Feb 13 15:33:13.977096 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:33:13.978039 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:33:13.980024 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:33:13.980081 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:33:13.983002 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:33:13.983053 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:33:13.985960 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:33:13.986975 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:33:13.989179 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:33:13.991512 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:33:13.993403 systemd-networkd[783]: eth0: DHCPv6 lease lost
Feb 13 15:33:13.995319 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:33:13.996527 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:33:13.999083 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:33:13.999134 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:33:14.008514 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:33:14.010460 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:33:14.010516 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:33:14.013968 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:33:14.016736 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:33:14.017780 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:33:14.026629 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:33:14.027596 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:33:14.029700 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:33:14.029755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:33:14.032932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:33:14.032982 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:33:14.036719 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:33:14.037787 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:33:14.040606 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:33:14.041640 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:33:14.044684 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:33:14.045811 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:33:14.047910 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:33:14.047956 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:33:14.050890 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:33:14.050943 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:33:14.053949 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:33:14.054869 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:33:14.056920 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:33:14.057903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:33:14.070504 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:33:14.072740 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:33:14.072799 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:33:14.076286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:33:14.077335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:33:14.079921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:33:14.081055 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:33:14.190960 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:33:14.192043 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:33:14.194540 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:33:14.196696 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:33:14.196770 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:33:14.209502 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:33:14.218044 systemd[1]: Switching root.
Feb 13 15:33:14.246931 systemd-journald[194]: Journal stopped
Feb 13 15:33:15.600262 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:33:15.600323 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:33:15.600345 kernel: SELinux: policy capability open_perms=1
Feb 13 15:33:15.600356 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:33:15.600385 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:33:15.600398 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:33:15.600414 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:33:15.600425 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:33:15.600437 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:33:15.600448 kernel: audit: type=1403 audit(1739460794.897:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:33:15.600465 systemd[1]: Successfully loaded SELinux policy in 39.671ms.
Feb 13 15:33:15.600486 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.621ms.
Feb 13 15:33:15.600499 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:33:15.600519 systemd[1]: Detected virtualization kvm.
Feb 13 15:33:15.600531 systemd[1]: Detected architecture x86-64.
Feb 13 15:33:15.600552 systemd[1]: Detected first boot.
Feb 13 15:33:15.600577 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:33:15.600590 zram_generator::config[1057]: No configuration found.
Feb 13 15:33:15.600603 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:33:15.600615 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:33:15.600630 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:33:15.600642 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:33:15.600654 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:33:15.600666 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:33:15.600682 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:33:15.600694 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:33:15.600707 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:33:15.600720 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:33:15.600734 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:33:15.600746 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:33:15.600757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:33:15.600769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:33:15.600781 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:33:15.600793 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:33:15.600805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:33:15.600817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:33:15.600829 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:33:15.600843 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:33:15.600855 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:33:15.600867 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:33:15.600880 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:33:15.600891 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:33:15.600903 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:33:15.600915 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:33:15.600927 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:33:15.600946 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:33:15.600958 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:33:15.600971 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:33:15.600982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:33:15.600995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:33:15.601006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:33:15.601019 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:33:15.601030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:33:15.601042 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:33:15.601061 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:33:15.601078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:33:15.601093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:33:15.601108 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:33:15.601124 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:33:15.601140 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:33:15.601154 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:33:15.601165 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:33:15.601180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:33:15.601192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:33:15.601204 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:33:15.601216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:33:15.601228 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:33:15.601240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:33:15.601253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:33:15.601265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:33:15.601277 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:33:15.601292 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:33:15.601303 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:33:15.601315 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:33:15.601327 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:33:15.601338 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:33:15.601350 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:33:15.601362 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:33:15.604394 systemd-journald[1120]: Collecting audit messages is disabled.
Feb 13 15:33:15.604425 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:33:15.604437 kernel: loop: module loaded
Feb 13 15:33:15.604449 systemd-journald[1120]: Journal started
Feb 13 15:33:15.604471 systemd-journald[1120]: Runtime Journal (/run/log/journal/d276d11fe5814b639b504bb7809d5857) is 6.0M, max 48.4M, 42.3M free.
Feb 13 15:33:15.393679 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:33:15.412741 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:33:15.413183 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:33:15.608557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:33:15.608588 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:33:15.608607 systemd[1]: Stopped verity-setup.service.
Feb 13 15:33:15.610407 kernel: fuse: init (API version 7.39)
Feb 13 15:33:15.614412 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:33:15.622468 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:33:15.623233 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:33:15.624506 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:33:15.625830 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:33:15.627121 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:33:15.628327 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:33:15.629588 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:33:15.630905 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:33:15.634880 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:33:15.635073 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:33:15.636671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:33:15.636848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:33:15.638292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:33:15.638471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:33:15.640093 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:33:15.640285 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:33:15.641927 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:33:15.642092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:33:15.643485 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:33:15.644895 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:33:15.646420 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:33:15.655399 kernel: ACPI: bus type drm_connector registered
Feb 13 15:33:15.655996 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:33:15.656179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:33:15.661192 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:33:15.670493 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:33:15.672742 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:33:15.673900 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:33:15.673929 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:33:15.675892 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:33:15.678106 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:33:15.681888 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:33:15.683181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:33:15.685054 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:33:15.690196 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:33:15.691749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:33:15.694945 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:33:15.696173 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:33:15.697485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:33:15.702350 systemd-journald[1120]: Time spent on flushing to /var/log/journal/d276d11fe5814b639b504bb7809d5857 is 29.208ms for 946 entries.
Feb 13 15:33:15.702350 systemd-journald[1120]: System Journal (/var/log/journal/d276d11fe5814b639b504bb7809d5857) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:33:15.788831 systemd-journald[1120]: Received client request to flush runtime journal.
Feb 13 15:33:15.788875 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 15:33:15.788901 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:33:15.702688 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:33:15.706483 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:33:15.708032 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:33:15.709543 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:33:15.724933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:33:15.727757 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:33:15.759024 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:33:15.759931 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:33:15.761648 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:33:15.768520 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:33:15.770351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:33:15.779730 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:33:15.791586 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:33:15.793118 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:33:15.801923 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:33:15.803429 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:33:15.812407 kernel: loop1: detected capacity change from 0 to 210664
Feb 13 15:33:15.819007 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:33:15.828609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:33:15.842546 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 15:33:15.854015 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Feb 13 15:33:15.854033 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Feb 13 15:33:15.860205 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:33:15.893403 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 15:33:15.904387 kernel: loop4: detected capacity change from 0 to 210664
Feb 13 15:33:15.914408 kernel: loop5: detected capacity change from 0 to 140992
Feb 13 15:33:15.923626 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:33:15.924396 (sd-merge)[1197]: Merged extensions into '/usr'.
Feb 13 15:33:15.930242 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:33:15.930260 systemd[1]: Reloading...
Feb 13 15:33:15.996404 zram_generator::config[1223]: No configuration found.
Feb 13 15:33:16.048259 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:33:16.115674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:33:16.164636 systemd[1]: Reloading finished in 233 ms.
Feb 13 15:33:16.198070 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:33:16.199562 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:33:16.214554 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:33:16.216541 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:33:16.223537 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:33:16.223554 systemd[1]: Reloading...
Feb 13 15:33:16.239106 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:33:16.239801 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:33:16.240912 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:33:16.241284 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Feb 13 15:33:16.241425 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Feb 13 15:33:16.245701 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:33:16.245714 systemd-tmpfiles[1262]: Skipping /boot
Feb 13 15:33:16.259289 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:33:16.259429 systemd-tmpfiles[1262]: Skipping /boot
Feb 13 15:33:16.282402 zram_generator::config[1293]: No configuration found.
Feb 13 15:33:16.379971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:33:16.428877 systemd[1]: Reloading finished in 204 ms.
Feb 13 15:33:16.447558 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:33:16.459103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:33:16.467507 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:33:16.469975 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:33:16.472697 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:33:16.478596 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:33:16.482694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:33:16.488988 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:33:16.494008 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:33:16.494233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:33:16.498656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:33:16.501271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:33:16.503884 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:33:16.505194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:33:16.507342 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:33:16.508598 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:33:16.509562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:33:16.509733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:33:16.511767 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:33:16.511927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:33:16.524151 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:33:16.528752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:33:16.529131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:33:16.532394 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Feb 13 15:33:16.537471 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:33:16.541455 augenrules[1362]: No rules
Feb 13 15:33:16.542571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:33:16.542869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:33:16.551607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:33:16.555719 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:33:16.561550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:33:16.565604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:33:16.567623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:33:16.569909 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:33:16.571688 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:33:16.573445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:33:16.576291 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:33:16.577907 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:33:16.578143 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:33:16.580962 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:33:16.584116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:33:16.584326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:33:16.586789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:33:16.586972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:33:16.589310 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:33:16.589652 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:33:16.602386 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:33:16.604885 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:33:16.605436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:33:16.625532 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:33:16.626657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:33:16.626717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:33:16.628651 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:33:16.630519 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1389)
Feb 13 15:33:16.633513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:33:16.633885 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:33:16.663319 systemd-resolved[1331]: Positive Trust Anchors:
Feb 13 15:33:16.663668 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:33:16.663749 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:33:16.667538 systemd-resolved[1331]: Defaulting to hostname 'linux'.
Feb 13 15:33:16.669517 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:33:16.673610 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:33:16.675549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:33:16.690498 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:33:16.700626 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:33:16.702712 systemd-networkd[1406]: lo: Link UP
Feb 13 15:33:16.702725 systemd-networkd[1406]: lo: Gained carrier
Feb 13 15:33:16.705137 systemd-networkd[1406]: Enumeration completed
Feb 13 15:33:16.705568 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:33:16.705577 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:33:16.705766 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:33:16.707000 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:33:16.707092 systemd-networkd[1406]: eth0: Link UP
Feb 13 15:33:16.707101 systemd-networkd[1406]: eth0: Gained carrier
Feb 13 15:33:16.707113 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:33:16.710519 systemd[1]: Reached target network.target - Network.
Feb 13 15:33:16.713857 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:33:16.718634 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:33:16.718948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:33:16.719594 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:33:16.720112 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Feb 13 15:33:16.721699 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:33:18.054959 systemd-resolved[1331]: Clock change detected. Flushing caches.
Feb 13 15:33:18.055070 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:33:18.055120 systemd-timesyncd[1408]: Initial clock synchronization to Thu 2025-02-13 15:33:18.054905 UTC.
Feb 13 15:33:18.057672 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:33:18.062543 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:33:18.090907 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:33:18.091135 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 15:33:18.091155 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:33:18.099837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:33:18.104496 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:33:18.198872 kernel: kvm_amd: TSC scaling supported
Feb 13 15:33:18.198922 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 15:33:18.198935 kernel: kvm_amd: Nested Paging enabled
Feb 13 15:33:18.198948 kernel: kvm_amd: LBR virtualization supported
Feb 13 15:33:18.199396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:33:18.199953 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 15:33:18.199968 kernel: kvm_amd: Virtual GIF supported
Feb 13 15:33:18.219521 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:33:18.250853 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:33:18.266623 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:33:18.274356 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:33:18.303135 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:33:18.304848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:33:18.306031 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:33:18.307257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:33:18.308584 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:33:18.310079 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:33:18.311342 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:33:18.312671 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:33:18.313960 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:33:18.313991 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:33:18.314939 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:33:18.316739 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:33:18.319840 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:33:18.332510 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:33:18.335267 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:33:18.336959 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:33:18.338171 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:33:18.339140 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:33:18.340110 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:33:18.340140 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:33:18.341111 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:33:18.343231 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:33:18.347573 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:33:18.350942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:33:18.352094 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:33:18.353695 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:33:18.354070 jq[1437]: false
Feb 13 15:33:18.356108 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:33:18.360963 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:33:18.364626 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:33:18.368652 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:33:18.373322 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:33:18.374775 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:33:18.375173 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:33:18.376915 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found loop3
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found loop4
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found loop5
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found sr0
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda1
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda2
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda3
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found usr
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda4
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda6
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda7
Feb 13 15:33:18.379666 extend-filesystems[1438]: Found vda9
Feb 13 15:33:18.379666 extend-filesystems[1438]: Checking size of /dev/vda9
Feb 13 15:33:18.435055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1393)
Feb 13 15:33:18.435088 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:33:18.385059 dbus-daemon[1436]: [system] SELinux support is enabled
Feb 13 15:33:18.450029 extend-filesystems[1438]: Resized partition /dev/vda9
Feb 13 15:33:18.454664 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:33:18.381110 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:33:18.484678 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:33:18.484678 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:33:18.484678 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:33:18.484678 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:33:18.504195 update_engine[1450]: I20250213 15:33:18.402441  1450 main.cc:92] Flatcar Update Engine starting
Feb 13 15:33:18.504195 update_engine[1450]: I20250213 15:33:18.403607  1450 update_check_scheduler.cc:74] Next update check in 8m33s
Feb 13 15:33:18.385767 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:33:18.505048 extend-filesystems[1438]: Resized filesystem in /dev/vda9
Feb 13 15:33:18.391766 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:33:18.511947 jq[1451]: true
Feb 13 15:33:18.395118 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:33:18.395380 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:33:18.395806 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:33:18.512633 jq[1459]: true
Feb 13 15:33:18.396055 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:33:18.400020 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:33:18.513014 tar[1458]: linux-amd64/helm
Feb 13 15:33:18.400281 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:33:18.419352 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:33:18.513755 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:33:18.445518 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:33:18.445567 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:33:18.446865 systemd-logind[1449]: New seat seat0.
Feb 13 15:33:18.449721 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:33:18.457933 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:33:18.463217 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:33:18.463410 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:33:18.465299 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:33:18.465443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:33:18.474879 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:33:18.490094 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:33:18.490724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:33:18.509781 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:33:18.520429 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:33:18.524341 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:33:18.544845 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:33:18.575391 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:33:18.585144 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:33:18.592563 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:33:18.592876 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:33:18.599810 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:33:18.613471 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:33:18.622929 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:33:18.626755 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:33:18.628281 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:33:18.660904 containerd[1466]: time="2025-02-13T15:33:18.660830590Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:33:18.684918 containerd[1466]: time="2025-02-13T15:33:18.684631004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.686591120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.686625264Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.686644981Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.686889700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.686929905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.687015947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.687032518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.687273901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.687294309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.687310880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688330 containerd[1466]: time="2025-02-13T15:33:18.687323504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688697 containerd[1466]: time="2025-02-13T15:33:18.687442757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688697 containerd[1466]: time="2025-02-13T15:33:18.687771945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688697 containerd[1466]: time="2025-02-13T15:33:18.687932656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:33:18.688697 containerd[1466]: time="2025-02-13T15:33:18.687951060Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:33:18.688697 containerd[1466]: time="2025-02-13T15:33:18.688074261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:33:18.688697 containerd[1466]: time="2025-02-13T15:33:18.688146567Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:33:18.696139 containerd[1466]: time="2025-02-13T15:33:18.696107817Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:33:18.696212 containerd[1466]: time="2025-02-13T15:33:18.696160316Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:33:18.696212 containerd[1466]: time="2025-02-13T15:33:18.696177778Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:33:18.696212 containerd[1466]: time="2025-02-13T15:33:18.696194921Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:33:18.696212 containerd[1466]: time="2025-02-13T15:33:18.696209097Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:33:18.696373 containerd[1466]: time="2025-02-13T15:33:18.696353238Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:33:18.696756 containerd[1466]: time="2025-02-13T15:33:18.696735264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:33:18.696925 containerd[1466]: time="2025-02-13T15:33:18.696852253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:33:18.696925 containerd[1466]: time="2025-02-13T15:33:18.696870037Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:33:18.696925 containerd[1466]: time="2025-02-13T15:33:18.696884454Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:33:18.696925 containerd[1466]: time="2025-02-13T15:33:18.696897869Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.696925 containerd[1466]: time="2025-02-13T15:33:18.696910012Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.696925 containerd[1466]: time="2025-02-13T15:33:18.696921163Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.696938245Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.696952271Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.696963482Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.696975274Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.696986445Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.697012764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.697025148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.697037290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.697058590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697072 containerd[1466]: time="2025-02-13T15:33:18.697070573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697081974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697093897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697105298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697118994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697132669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697143309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697155893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697167304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697180329Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697199054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697210796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697229932Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697279905Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:33:18.697303 containerd[1466]: time="2025-02-13T15:33:18.697295044Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:33:18.697660 containerd[1466]: time="2025-02-13T15:33:18.697304742Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:33:18.697660 containerd[1466]: time="2025-02-13T15:33:18.697315612Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:33:18.697660 containerd[1466]: time="2025-02-13T15:33:18.697324569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697660 containerd[1466]: time="2025-02-13T15:33:18.697335880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:33:18.697660 containerd[1466]: time="2025-02-13T15:33:18.697344847Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:33:18.697660 containerd[1466]: time="2025-02-13T15:33:18.697354034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:33:18.697809 containerd[1466]: time="2025-02-13T15:33:18.697665448Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:33:18.697809 containerd[1466]: time="2025-02-13T15:33:18.697710793Z" level=info msg="Connect containerd service"
Feb 13 15:33:18.697809 containerd[1466]: time="2025-02-13T15:33:18.697736882Z" level=info msg="using legacy CRI server"
Feb 13 15:33:18.697809 containerd[1466]: time="2025-02-13T15:33:18.697742884Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:33:18.698026 containerd[1466]: time="2025-02-13T15:33:18.697839595Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:33:18.698488 containerd[1466]: time="2025-02-13T15:33:18.698447766Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:33:18.698763 containerd[1466]: time="2025-02-13T15:33:18.698648081Z" level=info msg="Start subscribing containerd event"
Feb 13 15:33:18.698763 containerd[1466]: time="2025-02-13T15:33:18.698698486Z" level=info msg="Start recovering state"
Feb 13 15:33:18.698823 containerd[1466]: time="2025-02-13T15:33:18.698797672Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:33:18.698897 containerd[1466]: time="2025-02-13T15:33:18.698875357Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:33:18.699450 containerd[1466]: time="2025-02-13T15:33:18.699384422Z" level=info msg="Start event monitor"
Feb 13 15:33:18.699450 containerd[1466]: time="2025-02-13T15:33:18.699407495Z" level=info msg="Start snapshots syncer"
Feb 13 15:33:18.699450 containerd[1466]: time="2025-02-13T15:33:18.699416562Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:33:18.699975 containerd[1466]: time="2025-02-13T15:33:18.699436149Z" level=info msg="Start streaming server"
Feb 13 15:33:18.700536 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:33:18.701653 containerd[1466]: time="2025-02-13T15:33:18.701621067Z" level=info msg="containerd successfully booted in 0.042724s"
Feb 13 15:33:18.870696 tar[1458]: linux-amd64/LICENSE
Feb 13 15:33:18.870696 tar[1458]: linux-amd64/README.md
Feb 13 15:33:18.888850 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:33:18.921764 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:33:18.932087 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:59752.service - OpenSSH per-connection server daemon (10.0.0.1:59752). Feb 13 15:33:18.990043 sshd[1529]: Accepted publickey for core from 10.0.0.1 port 59752 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:18.991875 sshd-session[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:18.999530 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:33:19.016673 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:33:19.019379 systemd-logind[1449]: New session 1 of user core. Feb 13 15:33:19.029463 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:33:19.044671 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:33:19.048131 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:33:19.148992 systemd[1533]: Queued start job for default target default.target. Feb 13 15:33:19.160751 systemd[1533]: Created slice app.slice - User Application Slice. Feb 13 15:33:19.160775 systemd[1533]: Reached target paths.target - Paths. Feb 13 15:33:19.160789 systemd[1533]: Reached target timers.target - Timers. Feb 13 15:33:19.162344 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:33:19.174202 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:33:19.174368 systemd[1533]: Reached target sockets.target - Sockets. Feb 13 15:33:19.174394 systemd[1533]: Reached target basic.target - Basic System. Feb 13 15:33:19.174442 systemd[1533]: Reached target default.target - Main User Target. Feb 13 15:33:19.174498 systemd[1533]: Startup finished in 120ms. 
Feb 13 15:33:19.174840 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:33:19.177408 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:33:19.239267 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:59758.service - OpenSSH per-connection server daemon (10.0.0.1:59758). Feb 13 15:33:19.289073 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 59758 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:19.290596 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:19.294536 systemd-logind[1449]: New session 2 of user core. Feb 13 15:33:19.304644 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:33:19.359148 sshd[1546]: Connection closed by 10.0.0.1 port 59758 Feb 13 15:33:19.359510 sshd-session[1544]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:19.373992 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:59758.service: Deactivated successfully. Feb 13 15:33:19.375436 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:33:19.376629 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:33:19.377734 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:59764.service - OpenSSH per-connection server daemon (10.0.0.1:59764). Feb 13 15:33:19.379752 systemd-logind[1449]: Removed session 2. Feb 13 15:33:19.421665 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 59764 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:19.423147 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:19.426939 systemd-logind[1449]: New session 3 of user core. Feb 13 15:33:19.440614 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 15:33:19.457585 systemd-networkd[1406]: eth0: Gained IPv6LL Feb 13 15:33:19.460666 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:33:19.462516 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:33:19.475690 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:33:19.478195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:19.480630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:33:19.497167 sshd[1553]: Connection closed by 10.0.0.1 port 59764 Feb 13 15:33:19.497696 sshd-session[1551]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:19.501765 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:33:19.502075 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:33:19.504148 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:59764.service: Deactivated successfully. Feb 13 15:33:19.506572 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:33:19.508125 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:33:19.508940 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:33:19.511814 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:33:19.512273 systemd-logind[1449]: Removed session 3. Feb 13 15:33:20.113357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:20.115101 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:33:20.119577 systemd[1]: Startup finished in 692ms (kernel) + 6.208s (initrd) + 3.927s (userspace) = 10.829s. 
Feb 13 15:33:20.134280 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:20.571952 kubelet[1579]: E0213 15:33:20.571817 1579 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:20.575013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:20.575219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:29.510377 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:41734.service - OpenSSH per-connection server daemon (10.0.0.1:41734). Feb 13 15:33:29.553266 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 41734 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:29.554944 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:29.559131 systemd-logind[1449]: New session 4 of user core. Feb 13 15:33:29.569795 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:33:29.624447 sshd[1595]: Connection closed by 10.0.0.1 port 41734 Feb 13 15:33:29.624934 sshd-session[1593]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:29.636511 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:41734.service: Deactivated successfully. Feb 13 15:33:29.638086 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:33:29.639641 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:33:29.649836 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:41740.service - OpenSSH per-connection server daemon (10.0.0.1:41740). Feb 13 15:33:29.651087 systemd-logind[1449]: Removed session 4. 
Feb 13 15:33:29.687179 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 41740 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:29.688717 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:29.692575 systemd-logind[1449]: New session 5 of user core. Feb 13 15:33:29.703673 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:33:29.753116 sshd[1602]: Connection closed by 10.0.0.1 port 41740 Feb 13 15:33:29.753436 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:29.766245 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:41740.service: Deactivated successfully. Feb 13 15:33:29.767902 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:33:29.769726 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:33:29.770991 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:41744.service - OpenSSH per-connection server daemon (10.0.0.1:41744). Feb 13 15:33:29.771913 systemd-logind[1449]: Removed session 5. Feb 13 15:33:29.816592 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 41744 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:29.818107 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:29.822125 systemd-logind[1449]: New session 6 of user core. Feb 13 15:33:29.831597 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:33:29.886669 sshd[1609]: Connection closed by 10.0.0.1 port 41744 Feb 13 15:33:29.887091 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:29.904467 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:41744.service: Deactivated successfully. Feb 13 15:33:29.906218 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:33:29.907649 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. 
Feb 13 15:33:29.914802 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:41756.service - OpenSSH per-connection server daemon (10.0.0.1:41756). Feb 13 15:33:29.915987 systemd-logind[1449]: Removed session 6. Feb 13 15:33:29.952578 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 41756 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:29.954113 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:29.958865 systemd-logind[1449]: New session 7 of user core. Feb 13 15:33:29.977764 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:33:30.036193 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:33:30.036573 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:30.061282 sudo[1617]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:30.063207 sshd[1616]: Connection closed by 10.0.0.1 port 41756 Feb 13 15:33:30.063631 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:30.077131 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:41756.service: Deactivated successfully. Feb 13 15:33:30.079517 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:33:30.081440 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:33:30.090786 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:41762.service - OpenSSH per-connection server daemon (10.0.0.1:41762). Feb 13 15:33:30.091578 systemd-logind[1449]: Removed session 7. Feb 13 15:33:30.128976 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 41762 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:30.130401 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:30.134661 systemd-logind[1449]: New session 8 of user core. 
Feb 13 15:33:30.150605 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:33:30.204168 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:33:30.204525 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:30.207923 sudo[1626]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:30.214069 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:33:30.214410 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:30.231742 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:33:30.262744 augenrules[1648]: No rules Feb 13 15:33:30.263824 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:33:30.264118 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:33:30.265534 sudo[1625]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:30.267064 sshd[1624]: Connection closed by 10.0.0.1 port 41762 Feb 13 15:33:30.267447 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:30.284753 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:41762.service: Deactivated successfully. Feb 13 15:33:30.286624 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:33:30.288663 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:33:30.303918 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:41774.service - OpenSSH per-connection server daemon (10.0.0.1:41774). Feb 13 15:33:30.304978 systemd-logind[1449]: Removed session 8. 
Feb 13 15:33:30.345457 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 41774 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:33:30.346976 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:30.351689 systemd-logind[1449]: New session 9 of user core. Feb 13 15:33:30.361668 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:33:30.414523 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:33:30.414852 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:30.825613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:33:30.837726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:30.997667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:31.007995 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:31.083697 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:33:31.083848 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:33:31.186774 kubelet[1686]: E0213 15:33:31.186696 1686 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:31.194061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:31.194300 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:33:31.666545 dockerd[1693]: time="2025-02-13T15:33:31.666428669Z" level=info msg="Starting up" Feb 13 15:33:32.630074 dockerd[1693]: time="2025-02-13T15:33:32.630019956Z" level=info msg="Loading containers: start." Feb 13 15:33:33.002517 kernel: Initializing XFRM netlink socket Feb 13 15:33:33.095143 systemd-networkd[1406]: docker0: Link UP Feb 13 15:33:33.212511 dockerd[1693]: time="2025-02-13T15:33:33.212435383Z" level=info msg="Loading containers: done." Feb 13 15:33:33.233327 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2367765196-merged.mount: Deactivated successfully. Feb 13 15:33:33.333077 dockerd[1693]: time="2025-02-13T15:33:33.332987311Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:33:33.333270 dockerd[1693]: time="2025-02-13T15:33:33.333145979Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:33:33.333365 dockerd[1693]: time="2025-02-13T15:33:33.333332498Z" level=info msg="Daemon has completed initialization" Feb 13 15:33:33.507191 dockerd[1693]: time="2025-02-13T15:33:33.507009875Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:33:33.507591 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:33:34.427168 containerd[1466]: time="2025-02-13T15:33:34.427108268Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:33:35.320430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784418394.mount: Deactivated successfully. 
Feb 13 15:33:36.737529 containerd[1466]: time="2025-02-13T15:33:36.737430607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:36.747170 containerd[1466]: time="2025-02-13T15:33:36.747059505Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 15:33:36.749681 containerd[1466]: time="2025-02-13T15:33:36.749643260Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:36.756114 containerd[1466]: time="2025-02-13T15:33:36.756039997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:36.757368 containerd[1466]: time="2025-02-13T15:33:36.757312132Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.330156034s" Feb 13 15:33:36.757434 containerd[1466]: time="2025-02-13T15:33:36.757379609Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 15:33:36.784825 containerd[1466]: time="2025-02-13T15:33:36.784782028Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:33:38.774741 containerd[1466]: time="2025-02-13T15:33:38.774674397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:38.775552 containerd[1466]: time="2025-02-13T15:33:38.775500806Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 15:33:38.776821 containerd[1466]: time="2025-02-13T15:33:38.776794042Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:38.779798 containerd[1466]: time="2025-02-13T15:33:38.779726451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:38.780675 containerd[1466]: time="2025-02-13T15:33:38.780641136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.995817079s" Feb 13 15:33:38.780675 containerd[1466]: time="2025-02-13T15:33:38.780670421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 15:33:38.806091 containerd[1466]: time="2025-02-13T15:33:38.806001135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:33:40.444522 containerd[1466]: time="2025-02-13T15:33:40.444450663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:40.445232 containerd[1466]: 
time="2025-02-13T15:33:40.445166846Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 15:33:40.446340 containerd[1466]: time="2025-02-13T15:33:40.446307606Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:40.450736 containerd[1466]: time="2025-02-13T15:33:40.450682891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:40.451841 containerd[1466]: time="2025-02-13T15:33:40.451794746Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.645734189s" Feb 13 15:33:40.451841 containerd[1466]: time="2025-02-13T15:33:40.451828008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 15:33:40.481510 containerd[1466]: time="2025-02-13T15:33:40.481458165Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:33:41.444581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:33:41.453670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:41.593799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:33:41.598101 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:41.853019 kubelet[1991]: E0213 15:33:41.852812 1991 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:41.857602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:41.857811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:42.744572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4524969.mount: Deactivated successfully. Feb 13 15:33:42.992586 containerd[1466]: time="2025-02-13T15:33:42.992520955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:42.993372 containerd[1466]: time="2025-02-13T15:33:42.993301508Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 15:33:42.994473 containerd[1466]: time="2025-02-13T15:33:42.994447768Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:42.996381 containerd[1466]: time="2025-02-13T15:33:42.996306404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:42.996988 containerd[1466]: time="2025-02-13T15:33:42.996950221Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id 
\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.51529188s" Feb 13 15:33:42.997036 containerd[1466]: time="2025-02-13T15:33:42.996991649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:33:43.022455 containerd[1466]: time="2025-02-13T15:33:43.022396973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:33:44.078026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629675594.mount: Deactivated successfully. Feb 13 15:33:45.643534 containerd[1466]: time="2025-02-13T15:33:45.643458669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:45.644276 containerd[1466]: time="2025-02-13T15:33:45.644201322Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:33:45.645544 containerd[1466]: time="2025-02-13T15:33:45.645510637Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:45.648313 containerd[1466]: time="2025-02-13T15:33:45.648271224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:45.649417 containerd[1466]: time="2025-02-13T15:33:45.649375165Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.626938728s" Feb 13 15:33:45.649417 containerd[1466]: time="2025-02-13T15:33:45.649413927Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:33:45.672771 containerd[1466]: time="2025-02-13T15:33:45.672714514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:33:47.546441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181010413.mount: Deactivated successfully. Feb 13 15:33:47.556519 containerd[1466]: time="2025-02-13T15:33:47.556438078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:47.557431 containerd[1466]: time="2025-02-13T15:33:47.557373853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:33:47.558910 containerd[1466]: time="2025-02-13T15:33:47.558865440Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:47.561289 containerd[1466]: time="2025-02-13T15:33:47.561238140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:47.564439 containerd[1466]: time="2025-02-13T15:33:47.562990657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.889569999s" Feb 13 15:33:47.564439 containerd[1466]: time="2025-02-13T15:33:47.563023398Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:33:47.588347 containerd[1466]: time="2025-02-13T15:33:47.588310961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:33:48.432842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242790375.mount: Deactivated successfully. Feb 13 15:33:51.920585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:33:51.929679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:52.072637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:52.078231 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:52.138954 kubelet[2128]: E0213 15:33:52.138879 2128 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:52.143127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:52.143371 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:33:53.681949 containerd[1466]: time="2025-02-13T15:33:53.681865678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:53.699813 containerd[1466]: time="2025-02-13T15:33:53.699731876Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 15:33:53.726501 containerd[1466]: time="2025-02-13T15:33:53.726410240Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:53.746506 containerd[1466]: time="2025-02-13T15:33:53.746398191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:53.747665 containerd[1466]: time="2025-02-13T15:33:53.747610623Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 6.15909045s" Feb 13 15:33:53.747665 containerd[1466]: time="2025-02-13T15:33:53.747660829Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 15:33:56.178335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:56.185681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:56.206294 systemd[1]: Reloading requested from client PID 2217 ('systemctl') (unit session-9.scope)... Feb 13 15:33:56.206311 systemd[1]: Reloading... 
Feb 13 15:33:56.308597 zram_generator::config[2256]: No configuration found. Feb 13 15:33:56.640255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:56.718165 systemd[1]: Reloading finished in 511 ms. Feb 13 15:33:56.766911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:56.770716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:56.772099 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:33:56.772411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:56.780905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:56.931781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:56.937552 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:33:56.981593 kubelet[2306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:56.981593 kubelet[2306]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:33:56.981593 kubelet[2306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:33:56.982015 kubelet[2306]: I0213 15:33:56.981627 2306 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:33:57.288485 kubelet[2306]: I0213 15:33:57.288344 2306 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:33:57.288485 kubelet[2306]: I0213 15:33:57.288373 2306 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:33:57.288626 kubelet[2306]: I0213 15:33:57.288594 2306 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:33:57.300940 kubelet[2306]: I0213 15:33:57.300883 2306 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:33:57.301251 kubelet[2306]: E0213 15:33:57.301229 2306 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.313300 kubelet[2306]: I0213 15:33:57.313252 2306 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:33:57.314521 kubelet[2306]: I0213 15:33:57.314453 2306 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:33:57.314731 kubelet[2306]: I0213 15:33:57.314511 2306 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:33:57.314830 kubelet[2306]: I0213 15:33:57.314749 2306 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
15:33:57.314830 kubelet[2306]: I0213 15:33:57.314762 2306 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:33:57.314943 kubelet[2306]: I0213 15:33:57.314922 2306 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:57.315656 kubelet[2306]: I0213 15:33:57.315629 2306 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:33:57.315656 kubelet[2306]: I0213 15:33:57.315651 2306 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:33:57.315730 kubelet[2306]: I0213 15:33:57.315682 2306 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:33:57.315730 kubelet[2306]: I0213 15:33:57.315700 2306 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:33:57.319359 kubelet[2306]: W0213 15:33:57.319303 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.319359 kubelet[2306]: E0213 15:33:57.319359 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.319487 kubelet[2306]: W0213 15:33:57.319303 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.319487 kubelet[2306]: E0213 15:33:57.319384 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: 
connection refused Feb 13 15:33:57.319918 kubelet[2306]: I0213 15:33:57.319898 2306 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:33:57.321102 kubelet[2306]: I0213 15:33:57.321080 2306 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:33:57.321159 kubelet[2306]: W0213 15:33:57.321137 2306 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:33:57.321756 kubelet[2306]: I0213 15:33:57.321737 2306 server.go:1264] "Started kubelet" Feb 13 15:33:57.324644 kubelet[2306]: I0213 15:33:57.324092 2306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:33:57.324644 kubelet[2306]: I0213 15:33:57.324506 2306 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:33:57.324644 kubelet[2306]: I0213 15:33:57.324547 2306 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:33:57.325633 kubelet[2306]: I0213 15:33:57.324978 2306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:33:57.325633 kubelet[2306]: I0213 15:33:57.325601 2306 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:33:57.326721 kubelet[2306]: I0213 15:33:57.326699 2306 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:33:57.326813 kubelet[2306]: E0213 15:33:57.326624 2306 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce6a5d4388e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:33:57.3217057 +0000 UTC m=+0.379894443,LastTimestamp:2025-02-13 15:33:57.3217057 +0000 UTC m=+0.379894443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:33:57.326922 kubelet[2306]: I0213 15:33:57.326807 2306 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:33:57.326981 kubelet[2306]: I0213 15:33:57.326840 2306 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:33:57.327093 kubelet[2306]: E0213 15:33:57.327061 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:33:57.327180 kubelet[2306]: W0213 15:33:57.327138 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.327217 kubelet[2306]: E0213 15:33:57.327194 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.327431 kubelet[2306]: E0213 15:33:57.327404 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Feb 13 15:33:57.327673 kubelet[2306]: I0213 15:33:57.327656 2306 factory.go:221] Registration of 
the systemd container factory successfully Feb 13 15:33:57.327739 kubelet[2306]: I0213 15:33:57.327723 2306 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:33:57.328220 kubelet[2306]: E0213 15:33:57.328091 2306 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:33:57.328785 kubelet[2306]: I0213 15:33:57.328757 2306 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:33:57.346851 kubelet[2306]: I0213 15:33:57.345608 2306 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:33:57.346851 kubelet[2306]: I0213 15:33:57.345630 2306 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:33:57.346851 kubelet[2306]: I0213 15:33:57.345648 2306 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:57.347387 kubelet[2306]: I0213 15:33:57.347356 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:33:57.348779 kubelet[2306]: I0213 15:33:57.348740 2306 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:33:57.348827 kubelet[2306]: I0213 15:33:57.348783 2306 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:33:57.348827 kubelet[2306]: I0213 15:33:57.348805 2306 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:33:57.348919 kubelet[2306]: E0213 15:33:57.348845 2306 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:33:57.349384 kubelet[2306]: W0213 15:33:57.349327 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.349430 kubelet[2306]: E0213 15:33:57.349397 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:57.429538 kubelet[2306]: I0213 15:33:57.429432 2306 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:57.429845 kubelet[2306]: E0213 15:33:57.429816 2306 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Feb 13 15:33:57.449533 kubelet[2306]: E0213 15:33:57.449506 2306 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:33:57.528174 kubelet[2306]: E0213 15:33:57.528118 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: 
connect: connection refused" interval="400ms" Feb 13 15:33:57.631561 kubelet[2306]: I0213 15:33:57.631461 2306 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:57.631816 kubelet[2306]: E0213 15:33:57.631783 2306 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Feb 13 15:33:57.649996 kubelet[2306]: E0213 15:33:57.649961 2306 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:33:57.929594 kubelet[2306]: E0213 15:33:57.929542 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Feb 13 15:33:58.033060 kubelet[2306]: I0213 15:33:58.033030 2306 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:58.033503 kubelet[2306]: E0213 15:33:58.033300 2306 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Feb 13 15:33:58.050518 kubelet[2306]: E0213 15:33:58.050492 2306 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:33:58.470554 kubelet[2306]: W0213 15:33:58.470461 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.470554 kubelet[2306]: E0213 15:33:58.470551 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.562290 kubelet[2306]: W0213 15:33:58.562235 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.562368 kubelet[2306]: E0213 15:33:58.562299 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.666194 kubelet[2306]: W0213 15:33:58.666102 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.666194 kubelet[2306]: E0213 15:33:58.666191 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.730702 kubelet[2306]: E0213 15:33:58.730601 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="1.6s" Feb 13 15:33:58.788133 kubelet[2306]: W0213 15:33:58.788073 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.788133 kubelet[2306]: E0213 15:33:58.788133 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:33:58.834590 kubelet[2306]: I0213 15:33:58.834558 2306 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:33:58.834841 kubelet[2306]: E0213 15:33:58.834809 2306 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Feb 13 15:33:58.851030 kubelet[2306]: E0213 15:33:58.850999 2306 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:33:59.161654 kubelet[2306]: I0213 15:33:59.161509 2306 policy_none.go:49] "None policy: Start" Feb 13 15:33:59.162390 kubelet[2306]: I0213 15:33:59.162355 2306 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:33:59.162390 kubelet[2306]: I0213 15:33:59.162384 2306 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:33:59.243910 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:33:59.254270 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:33:59.257085 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:33:59.264304 kubelet[2306]: I0213 15:33:59.264270 2306 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:33:59.264797 kubelet[2306]: I0213 15:33:59.264508 2306 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:33:59.264797 kubelet[2306]: I0213 15:33:59.264628 2306 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:33:59.265914 kubelet[2306]: E0213 15:33:59.265881 2306 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:33:59.436524 kubelet[2306]: E0213 15:33:59.436471 2306 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:00.193307 kubelet[2306]: W0213 15:34:00.193265 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:00.193307 kubelet[2306]: E0213 15:34:00.193304 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:00.330955 kubelet[2306]: E0213 15:34:00.330905 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="3.2s" Feb 13 
15:34:00.436589 kubelet[2306]: I0213 15:34:00.436538 2306 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:34:00.436932 kubelet[2306]: E0213 15:34:00.436892 2306 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Feb 13 15:34:00.451370 kubelet[2306]: I0213 15:34:00.451231 2306 topology_manager.go:215] "Topology Admit Handler" podUID="d5d111a79ce52ab20a301650aae55ba8" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:34:00.452344 kubelet[2306]: I0213 15:34:00.452320 2306 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:34:00.452979 kubelet[2306]: I0213 15:34:00.452952 2306 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:34:00.458615 systemd[1]: Created slice kubepods-burstable-podd5d111a79ce52ab20a301650aae55ba8.slice - libcontainer container kubepods-burstable-podd5d111a79ce52ab20a301650aae55ba8.slice. Feb 13 15:34:00.477151 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:34:00.487271 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
Feb 13 15:34:00.539641 kubelet[2306]: W0213 15:34:00.539605 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:00.539641 kubelet[2306]: E0213 15:34:00.539642 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:00.544683 kubelet[2306]: I0213 15:34:00.544661 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:34:00.544753 kubelet[2306]: I0213 15:34:00.544686 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:34:00.544753 kubelet[2306]: I0213 15:34:00.544702 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:34:00.544753 kubelet[2306]: I0213 15:34:00.544715 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:34:00.544753 kubelet[2306]: I0213 15:34:00.544730 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:34:00.544753 kubelet[2306]: I0213 15:34:00.544745 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5d111a79ce52ab20a301650aae55ba8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5d111a79ce52ab20a301650aae55ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:34:00.544877 kubelet[2306]: I0213 15:34:00.544758 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5d111a79ce52ab20a301650aae55ba8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5d111a79ce52ab20a301650aae55ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:34:00.544877 kubelet[2306]: I0213 15:34:00.544783 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5d111a79ce52ab20a301650aae55ba8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5d111a79ce52ab20a301650aae55ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:34:00.544877 kubelet[2306]: I0213 15:34:00.544800 2306 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:34:00.775874 kubelet[2306]: E0213 15:34:00.775703 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:00.776632 containerd[1466]: time="2025-02-13T15:34:00.776576381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5d111a79ce52ab20a301650aae55ba8,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:00.785644 kubelet[2306]: E0213 15:34:00.785607 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:00.786004 containerd[1466]: time="2025-02-13T15:34:00.785966980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:00.789318 kubelet[2306]: E0213 15:34:00.789281 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:00.789785 containerd[1466]: time="2025-02-13T15:34:00.789747606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:01.062733 kubelet[2306]: W0213 15:34:01.062587 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 
15:34:01.062733 kubelet[2306]: E0213 15:34:01.062635 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:01.436835 kubelet[2306]: W0213 15:34:01.436796 2306 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:01.436835 kubelet[2306]: E0213 15:34:01.436833 2306 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Feb 13 15:34:01.651500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248343900.mount: Deactivated successfully. 
Feb 13 15:34:01.657657 containerd[1466]: time="2025-02-13T15:34:01.657614721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:34:01.660308 containerd[1466]: time="2025-02-13T15:34:01.660275205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:34:01.661310 containerd[1466]: time="2025-02-13T15:34:01.661265325Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:34:01.663060 containerd[1466]: time="2025-02-13T15:34:01.663028523Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:34:01.665414 containerd[1466]: time="2025-02-13T15:34:01.665379688Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:34:01.666970 containerd[1466]: time="2025-02-13T15:34:01.666936014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:34:01.667934 containerd[1466]: time="2025-02-13T15:34:01.667903641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:34:01.669252 containerd[1466]: time="2025-02-13T15:34:01.669199172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:34:01.670295 
containerd[1466]: time="2025-02-13T15:34:01.670268623Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 884.208747ms" Feb 13 15:34:01.671509 containerd[1466]: time="2025-02-13T15:34:01.671458212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 894.745923ms" Feb 13 15:34:01.675696 containerd[1466]: time="2025-02-13T15:34:01.675666494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 885.842552ms" Feb 13 15:34:01.793060 containerd[1466]: time="2025-02-13T15:34:01.792810316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:01.793060 containerd[1466]: time="2025-02-13T15:34:01.792972034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:01.793060 containerd[1466]: time="2025-02-13T15:34:01.793045102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:01.794517 containerd[1466]: time="2025-02-13T15:34:01.794116467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:01.794517 containerd[1466]: time="2025-02-13T15:34:01.791959189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:01.794590 containerd[1466]: time="2025-02-13T15:34:01.794365039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:01.794590 containerd[1466]: time="2025-02-13T15:34:01.794383063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:01.794590 containerd[1466]: time="2025-02-13T15:34:01.794501007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:01.795503 containerd[1466]: time="2025-02-13T15:34:01.795395736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:01.795615 containerd[1466]: time="2025-02-13T15:34:01.795466761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:01.795615 containerd[1466]: time="2025-02-13T15:34:01.795550540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:01.795886 containerd[1466]: time="2025-02-13T15:34:01.795671009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:01.816678 systemd[1]: Started cri-containerd-526d151da0de68f137499e4c0a3fee69ecd36a1fd69dd9ba124c7e2b517bcc95.scope - libcontainer container 526d151da0de68f137499e4c0a3fee69ecd36a1fd69dd9ba124c7e2b517bcc95. 
Feb 13 15:34:01.820562 systemd[1]: Started cri-containerd-0a180d1603fbd7310e0a7e8e45b6277c3b2ee878fc19cb017cb69633925fbe45.scope - libcontainer container 0a180d1603fbd7310e0a7e8e45b6277c3b2ee878fc19cb017cb69633925fbe45. Feb 13 15:34:01.822139 systemd[1]: Started cri-containerd-8609407104cf8014ed9f7682e640dd065828d185188b873d1d01b8160619bd99.scope - libcontainer container 8609407104cf8014ed9f7682e640dd065828d185188b873d1d01b8160619bd99. Feb 13 15:34:01.857070 containerd[1466]: time="2025-02-13T15:34:01.856932743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5d111a79ce52ab20a301650aae55ba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"526d151da0de68f137499e4c0a3fee69ecd36a1fd69dd9ba124c7e2b517bcc95\"" Feb 13 15:34:01.857359 containerd[1466]: time="2025-02-13T15:34:01.857042963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a180d1603fbd7310e0a7e8e45b6277c3b2ee878fc19cb017cb69633925fbe45\"" Feb 13 15:34:01.858459 kubelet[2306]: E0213 15:34:01.858420 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:01.858742 kubelet[2306]: E0213 15:34:01.858696 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:01.862583 containerd[1466]: time="2025-02-13T15:34:01.862545433Z" level=info msg="CreateContainer within sandbox \"0a180d1603fbd7310e0a7e8e45b6277c3b2ee878fc19cb017cb69633925fbe45\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:34:01.862820 containerd[1466]: time="2025-02-13T15:34:01.862794365Z" level=info msg="CreateContainer within sandbox 
\"526d151da0de68f137499e4c0a3fee69ecd36a1fd69dd9ba124c7e2b517bcc95\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:34:01.863089 containerd[1466]: time="2025-02-13T15:34:01.863012920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"8609407104cf8014ed9f7682e640dd065828d185188b873d1d01b8160619bd99\"" Feb 13 15:34:01.863806 kubelet[2306]: E0213 15:34:01.863787 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:01.865471 containerd[1466]: time="2025-02-13T15:34:01.865446302Z" level=info msg="CreateContainer within sandbox \"8609407104cf8014ed9f7682e640dd065828d185188b873d1d01b8160619bd99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:34:01.998581 containerd[1466]: time="2025-02-13T15:34:01.998534694Z" level=info msg="CreateContainer within sandbox \"526d151da0de68f137499e4c0a3fee69ecd36a1fd69dd9ba124c7e2b517bcc95\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12f3a22546bdf78d54b5ab10cca5b81cefd6608ed31d7bb5ac6bd00842117ded\"" Feb 13 15:34:01.999288 containerd[1466]: time="2025-02-13T15:34:01.999123312Z" level=info msg="StartContainer for \"12f3a22546bdf78d54b5ab10cca5b81cefd6608ed31d7bb5ac6bd00842117ded\"" Feb 13 15:34:02.004502 containerd[1466]: time="2025-02-13T15:34:02.004452707Z" level=info msg="CreateContainer within sandbox \"0a180d1603fbd7310e0a7e8e45b6277c3b2ee878fc19cb017cb69633925fbe45\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a379ec781ef60fdbc82e521a33880f3ddecb447822e057083a473eb222debe5e\"" Feb 13 15:34:02.004902 containerd[1466]: time="2025-02-13T15:34:02.004862965Z" level=info msg="StartContainer for 
\"a379ec781ef60fdbc82e521a33880f3ddecb447822e057083a473eb222debe5e\"" Feb 13 15:34:02.008024 containerd[1466]: time="2025-02-13T15:34:02.007993326Z" level=info msg="CreateContainer within sandbox \"8609407104cf8014ed9f7682e640dd065828d185188b873d1d01b8160619bd99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"708453bb065970a2591fa19887ae16d5e10ce9937728f221e5e0d6d544ae0d92\"" Feb 13 15:34:02.008850 containerd[1466]: time="2025-02-13T15:34:02.008674368Z" level=info msg="StartContainer for \"708453bb065970a2591fa19887ae16d5e10ce9937728f221e5e0d6d544ae0d92\"" Feb 13 15:34:02.028751 systemd[1]: Started cri-containerd-12f3a22546bdf78d54b5ab10cca5b81cefd6608ed31d7bb5ac6bd00842117ded.scope - libcontainer container 12f3a22546bdf78d54b5ab10cca5b81cefd6608ed31d7bb5ac6bd00842117ded. Feb 13 15:34:02.041645 systemd[1]: Started cri-containerd-708453bb065970a2591fa19887ae16d5e10ce9937728f221e5e0d6d544ae0d92.scope - libcontainer container 708453bb065970a2591fa19887ae16d5e10ce9937728f221e5e0d6d544ae0d92. Feb 13 15:34:02.043520 systemd[1]: Started cri-containerd-a379ec781ef60fdbc82e521a33880f3ddecb447822e057083a473eb222debe5e.scope - libcontainer container a379ec781ef60fdbc82e521a33880f3ddecb447822e057083a473eb222debe5e. 
Feb 13 15:34:02.086906 containerd[1466]: time="2025-02-13T15:34:02.086854346Z" level=info msg="StartContainer for \"12f3a22546bdf78d54b5ab10cca5b81cefd6608ed31d7bb5ac6bd00842117ded\" returns successfully" Feb 13 15:34:02.092136 containerd[1466]: time="2025-02-13T15:34:02.092086385Z" level=info msg="StartContainer for \"708453bb065970a2591fa19887ae16d5e10ce9937728f221e5e0d6d544ae0d92\" returns successfully" Feb 13 15:34:02.098354 containerd[1466]: time="2025-02-13T15:34:02.098310366Z" level=info msg="StartContainer for \"a379ec781ef60fdbc82e521a33880f3ddecb447822e057083a473eb222debe5e\" returns successfully" Feb 13 15:34:02.366879 kubelet[2306]: E0213 15:34:02.366386 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:02.368411 kubelet[2306]: E0213 15:34:02.368134 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:02.371467 kubelet[2306]: E0213 15:34:02.371445 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:03.292193 update_engine[1450]: I20250213 15:34:03.292100 1450 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:34:03.371555 kubelet[2306]: E0213 15:34:03.371519 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:03.638517 kubelet[2306]: I0213 15:34:03.638379 2306 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:34:03.700551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2587) Feb 13 15:34:03.734753 kubelet[2306]: I0213 15:34:03.734706 2306 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:34:03.748502 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2587) Feb 13 15:34:03.751900 kubelet[2306]: E0213 15:34:03.751826 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:03.787602 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2587) Feb 13 15:34:03.852874 kubelet[2306]: E0213 15:34:03.852830 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:03.953363 kubelet[2306]: E0213 15:34:03.953330 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.053796 kubelet[2306]: E0213 15:34:04.053771 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.154380 kubelet[2306]: E0213 15:34:04.154322 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.255075 kubelet[2306]: E0213 15:34:04.254949 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.355293 kubelet[2306]: E0213 15:34:04.355243 2306 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.456144 kubelet[2306]: E0213 15:34:04.456101 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.556805 kubelet[2306]: E0213 15:34:04.556651 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.657405 kubelet[2306]: E0213 15:34:04.657358 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.757986 kubelet[2306]: E0213 15:34:04.757919 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.859060 kubelet[2306]: E0213 15:34:04.858943 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:04.959447 kubelet[2306]: E0213 15:34:04.959407 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.060093 kubelet[2306]: E0213 15:34:05.060047 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.160252 kubelet[2306]: E0213 15:34:05.160143 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.260653 kubelet[2306]: E0213 15:34:05.260619 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.361089 kubelet[2306]: E0213 15:34:05.361039 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.461496 kubelet[2306]: E0213 15:34:05.461440 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" 
not found" Feb 13 15:34:05.474627 systemd[1]: Reloading requested from client PID 2597 ('systemctl') (unit session-9.scope)... Feb 13 15:34:05.474641 systemd[1]: Reloading... Feb 13 15:34:05.540504 zram_generator::config[2636]: No configuration found. Feb 13 15:34:05.561785 kubelet[2306]: E0213 15:34:05.561724 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.648079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:34:05.662240 kubelet[2306]: E0213 15:34:05.662210 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.735793 systemd[1]: Reloading finished in 260 ms. Feb 13 15:34:05.762743 kubelet[2306]: E0213 15:34:05.762700 2306 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:05.775962 kubelet[2306]: I0213 15:34:05.775888 2306 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:34:05.776007 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:34:05.800943 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:34:05.801272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:34:05.810688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:34:05.953811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:34:05.958340 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:34:05.994121 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:34:05.994121 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:34:05.994121 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:34:05.994544 kubelet[2681]: I0213 15:34:05.994107 2681 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:34:05.998532 kubelet[2681]: I0213 15:34:05.998511 2681 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:34:05.998532 kubelet[2681]: I0213 15:34:05.998529 2681 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:34:05.998691 kubelet[2681]: I0213 15:34:05.998671 2681 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:34:05.999729 kubelet[2681]: I0213 15:34:05.999711 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:34:06.000654 kubelet[2681]: I0213 15:34:06.000629 2681 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:34:06.008198 kubelet[2681]: I0213 15:34:06.008164 2681 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:34:06.008427 kubelet[2681]: I0213 15:34:06.008389 2681 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:34:06.008588 kubelet[2681]: I0213 15:34:06.008415 2681 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:34:06.008690 kubelet[2681]: I0213 15:34:06.008591 2681 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
15:34:06.008690 kubelet[2681]: I0213 15:34:06.008600 2681 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:34:06.008690 kubelet[2681]: I0213 15:34:06.008643 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:34:06.008775 kubelet[2681]: I0213 15:34:06.008735 2681 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:34:06.008775 kubelet[2681]: I0213 15:34:06.008761 2681 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:34:06.008822 kubelet[2681]: I0213 15:34:06.008781 2681 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:34:06.008822 kubelet[2681]: I0213 15:34:06.008798 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:34:06.009337 kubelet[2681]: I0213 15:34:06.009312 2681 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:34:06.011047 kubelet[2681]: I0213 15:34:06.009457 2681 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:34:06.011047 kubelet[2681]: I0213 15:34:06.009866 2681 server.go:1264] "Started kubelet" Feb 13 15:34:06.011047 kubelet[2681]: I0213 15:34:06.010021 2681 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:34:06.011047 kubelet[2681]: I0213 15:34:06.010046 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:34:06.011047 kubelet[2681]: I0213 15:34:06.010270 2681 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:34:06.011325 kubelet[2681]: I0213 15:34:06.011296 2681 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:34:06.014673 kubelet[2681]: I0213 15:34:06.014642 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:34:06.018934 kubelet[2681]: E0213 15:34:06.018388 2681 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:34:06.018934 kubelet[2681]: I0213 15:34:06.018428 2681 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:34:06.018934 kubelet[2681]: I0213 15:34:06.018536 2681 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:34:06.018934 kubelet[2681]: I0213 15:34:06.018665 2681 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:34:06.024472 kubelet[2681]: I0213 15:34:06.024447 2681 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:34:06.024613 kubelet[2681]: I0213 15:34:06.024587 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:34:06.026423 kubelet[2681]: I0213 15:34:06.026397 2681 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:34:06.026963 kubelet[2681]: E0213 15:34:06.026946 2681 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:34:06.028510 kubelet[2681]: I0213 15:34:06.028409 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:34:06.029806 kubelet[2681]: I0213 15:34:06.029777 2681 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:34:06.029862 kubelet[2681]: I0213 15:34:06.029810 2681 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:34:06.029862 kubelet[2681]: I0213 15:34:06.029831 2681 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:34:06.029981 kubelet[2681]: E0213 15:34:06.029871 2681 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:34:06.057531 kubelet[2681]: I0213 15:34:06.057493 2681 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:34:06.057531 kubelet[2681]: I0213 15:34:06.057509 2681 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:34:06.057531 kubelet[2681]: I0213 15:34:06.057527 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:34:06.057738 kubelet[2681]: I0213 15:34:06.057680 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:34:06.057738 kubelet[2681]: I0213 15:34:06.057694 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:34:06.057738 kubelet[2681]: I0213 15:34:06.057725 2681 policy_none.go:49] "None policy: Start" Feb 13 15:34:06.058279 kubelet[2681]: I0213 15:34:06.058261 2681 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:34:06.058341 kubelet[2681]: I0213 15:34:06.058282 2681 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:34:06.058426 kubelet[2681]: I0213 15:34:06.058406 2681 state_mem.go:75] "Updated machine memory state" Feb 13 15:34:06.062296 kubelet[2681]: I0213 15:34:06.062262 2681 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:34:06.062494 kubelet[2681]: I0213 15:34:06.062447 2681 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:34:06.062684 kubelet[2681]: I0213 15:34:06.062668 2681 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:34:06.124441 kubelet[2681]: I0213 15:34:06.122315 2681 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:34:06.130099 kubelet[2681]: I0213 15:34:06.130043 2681 topology_manager.go:215] "Topology Admit Handler" podUID="d5d111a79ce52ab20a301650aae55ba8" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:34:06.130242 kubelet[2681]: I0213 15:34:06.130148 2681 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:34:06.130242 kubelet[2681]: I0213 15:34:06.130205 2681 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:34:06.235492 kubelet[2681]: I0213 15:34:06.235395 2681 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:34:06.235492 kubelet[2681]: I0213 15:34:06.235493 2681 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:34:06.320397 kubelet[2681]: I0213 15:34:06.320260 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5d111a79ce52ab20a301650aae55ba8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5d111a79ce52ab20a301650aae55ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:34:06.320397 kubelet[2681]: I0213 15:34:06.320303 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:34:06.320397 kubelet[2681]: I0213 15:34:06.320325 2681 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:34:06.320397 kubelet[2681]: I0213 15:34:06.320344 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:34:06.320397 kubelet[2681]: I0213 15:34:06.320361 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:34:06.320667 kubelet[2681]: I0213 15:34:06.320388 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5d111a79ce52ab20a301650aae55ba8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5d111a79ce52ab20a301650aae55ba8\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:34:06.320667 kubelet[2681]: I0213 15:34:06.320405 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5d111a79ce52ab20a301650aae55ba8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5d111a79ce52ab20a301650aae55ba8\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:34:06.320667 kubelet[2681]: I0213 15:34:06.320422 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:34:06.320667 kubelet[2681]: I0213 15:34:06.320436 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:34:06.537212 kubelet[2681]: E0213 15:34:06.537153 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:06.537717 kubelet[2681]: E0213 15:34:06.537558 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:06.537717 kubelet[2681]: E0213 15:34:06.537631 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:07.009469 kubelet[2681]: I0213 15:34:07.009421 2681 apiserver.go:52] "Watching apiserver"
Feb 13 15:34:07.019675 kubelet[2681]: I0213 15:34:07.019657 2681 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:34:07.043577 kubelet[2681]: E0213 15:34:07.043546 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:07.043652 kubelet[2681]: E0213 15:34:07.043550 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:07.192100 kubelet[2681]: E0213 15:34:07.191594 2681 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:34:07.192100 kubelet[2681]: E0213 15:34:07.192033 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:07.519672 kubelet[2681]: I0213 15:34:07.519602 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.519587033 podStartE2EDuration="1.519587033s" podCreationTimestamp="2025-02-13 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:07.519387084 +0000 UTC m=+1.557319960" watchObservedRunningTime="2025-02-13 15:34:07.519587033 +0000 UTC m=+1.557519899"
Feb 13 15:34:07.553459 kubelet[2681]: I0213 15:34:07.553392 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.553376896 podStartE2EDuration="1.553376896s" podCreationTimestamp="2025-02-13 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:07.54130351 +0000 UTC m=+1.579236396" watchObservedRunningTime="2025-02-13 15:34:07.553376896 +0000 UTC m=+1.591309762"
Feb 13 15:34:07.553650 kubelet[2681]: I0213 15:34:07.553465 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5534611059999999 podStartE2EDuration="1.553461106s" podCreationTimestamp="2025-02-13 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:07.553099532 +0000 UTC m=+1.591032398" watchObservedRunningTime="2025-02-13 15:34:07.553461106 +0000 UTC m=+1.591393972"
Feb 13 15:34:08.046087 kubelet[2681]: E0213 15:34:08.045412 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:09.047974 kubelet[2681]: E0213 15:34:09.047925 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:09.996678 kubelet[2681]: E0213 15:34:09.996638 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:11.015649 sudo[1659]: pam_unix(sudo:session): session closed for user root
Feb 13 15:34:11.017126 sshd[1658]: Connection closed by 10.0.0.1 port 41774
Feb 13 15:34:11.017535 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:11.021362 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:41774.service: Deactivated successfully.
Feb 13 15:34:11.023049 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:34:11.023224 systemd[1]: session-9.scope: Consumed 5.236s CPU time, 192.4M memory peak, 0B memory swap peak.
Feb 13 15:34:11.023616 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:34:11.024384 systemd-logind[1449]: Removed session 9.
Feb 13 15:34:16.300852 kubelet[2681]: E0213 15:34:16.300800 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:16.465835 kubelet[2681]: E0213 15:34:16.465752 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:17.057373 kubelet[2681]: E0213 15:34:17.056946 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:17.057373 kubelet[2681]: E0213 15:34:17.057295 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:18.058023 kubelet[2681]: E0213 15:34:18.057980 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:20.000923 kubelet[2681]: E0213 15:34:20.000885 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:20.777717 kubelet[2681]: I0213 15:34:20.777666 2681 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:34:20.778070 containerd[1466]: time="2025-02-13T15:34:20.778024133Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:34:20.778471 kubelet[2681]: I0213 15:34:20.778208 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:34:22.072513 kubelet[2681]: I0213 15:34:22.071637 2681 topology_manager.go:215] "Topology Admit Handler" podUID="f1e465d7-fcbc-4009-979d-1ea21c9ee320" podNamespace="kube-system" podName="kube-proxy-7762w"
Feb 13 15:34:22.080046 systemd[1]: Created slice kubepods-besteffort-podf1e465d7_fcbc_4009_979d_1ea21c9ee320.slice - libcontainer container kubepods-besteffort-podf1e465d7_fcbc_4009_979d_1ea21c9ee320.slice.
Feb 13 15:34:22.117296 kubelet[2681]: I0213 15:34:22.117249 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1e465d7-fcbc-4009-979d-1ea21c9ee320-lib-modules\") pod \"kube-proxy-7762w\" (UID: \"f1e465d7-fcbc-4009-979d-1ea21c9ee320\") " pod="kube-system/kube-proxy-7762w"
Feb 13 15:34:22.117296 kubelet[2681]: I0213 15:34:22.117286 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1e465d7-fcbc-4009-979d-1ea21c9ee320-xtables-lock\") pod \"kube-proxy-7762w\" (UID: \"f1e465d7-fcbc-4009-979d-1ea21c9ee320\") " pod="kube-system/kube-proxy-7762w"
Feb 13 15:34:22.117296 kubelet[2681]: I0213 15:34:22.117308 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47dn\" (UniqueName: \"kubernetes.io/projected/f1e465d7-fcbc-4009-979d-1ea21c9ee320-kube-api-access-w47dn\") pod \"kube-proxy-7762w\" (UID: \"f1e465d7-fcbc-4009-979d-1ea21c9ee320\") " pod="kube-system/kube-proxy-7762w"
Feb 13 15:34:22.117554 kubelet[2681]: I0213 15:34:22.117327 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1e465d7-fcbc-4009-979d-1ea21c9ee320-kube-proxy\") pod \"kube-proxy-7762w\" (UID: \"f1e465d7-fcbc-4009-979d-1ea21c9ee320\") " pod="kube-system/kube-proxy-7762w"
Feb 13 15:34:22.423711 kubelet[2681]: E0213 15:34:22.423439 2681 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 15:34:22.423711 kubelet[2681]: E0213 15:34:22.423497 2681 projected.go:200] Error preparing data for projected volume kube-api-access-w47dn for pod kube-system/kube-proxy-7762w: configmap "kube-root-ca.crt" not found
Feb 13 15:34:22.423711 kubelet[2681]: E0213 15:34:22.423576 2681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1e465d7-fcbc-4009-979d-1ea21c9ee320-kube-api-access-w47dn podName:f1e465d7-fcbc-4009-979d-1ea21c9ee320 nodeName:}" failed. No retries permitted until 2025-02-13 15:34:22.923555858 +0000 UTC m=+16.961488724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w47dn" (UniqueName: "kubernetes.io/projected/f1e465d7-fcbc-4009-979d-1ea21c9ee320-kube-api-access-w47dn") pod "kube-proxy-7762w" (UID: "f1e465d7-fcbc-4009-979d-1ea21c9ee320") : configmap "kube-root-ca.crt" not found
Feb 13 15:34:22.474429 kubelet[2681]: I0213 15:34:22.474241 2681 topology_manager.go:215] "Topology Admit Handler" podUID="7bb80a49-9f3d-48d5-b9b6-64192c10ca44" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-np7t9"
Feb 13 15:34:22.481631 systemd[1]: Created slice kubepods-besteffort-pod7bb80a49_9f3d_48d5_b9b6_64192c10ca44.slice - libcontainer container kubepods-besteffort-pod7bb80a49_9f3d_48d5_b9b6_64192c10ca44.slice.
Feb 13 15:34:22.521316 kubelet[2681]: I0213 15:34:22.521249 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fggsb\" (UniqueName: \"kubernetes.io/projected/7bb80a49-9f3d-48d5-b9b6-64192c10ca44-kube-api-access-fggsb\") pod \"tigera-operator-7bc55997bb-np7t9\" (UID: \"7bb80a49-9f3d-48d5-b9b6-64192c10ca44\") " pod="tigera-operator/tigera-operator-7bc55997bb-np7t9"
Feb 13 15:34:22.521316 kubelet[2681]: I0213 15:34:22.521300 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bb80a49-9f3d-48d5-b9b6-64192c10ca44-var-lib-calico\") pod \"tigera-operator-7bc55997bb-np7t9\" (UID: \"7bb80a49-9f3d-48d5-b9b6-64192c10ca44\") " pod="tigera-operator/tigera-operator-7bc55997bb-np7t9"
Feb 13 15:34:22.785138 containerd[1466]: time="2025-02-13T15:34:22.785004219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-np7t9,Uid:7bb80a49-9f3d-48d5-b9b6-64192c10ca44,Namespace:tigera-operator,Attempt:0,}"
Feb 13 15:34:22.989933 kubelet[2681]: E0213 15:34:22.989900 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:22.990366 containerd[1466]: time="2025-02-13T15:34:22.990326199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7762w,Uid:f1e465d7-fcbc-4009-979d-1ea21c9ee320,Namespace:kube-system,Attempt:0,}"
Feb 13 15:34:23.888861 containerd[1466]: time="2025-02-13T15:34:23.888759722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:23.888861 containerd[1466]: time="2025-02-13T15:34:23.888835364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:23.888861 containerd[1466]: time="2025-02-13T15:34:23.888851765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:23.889528 containerd[1466]: time="2025-02-13T15:34:23.888968023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:23.891337 containerd[1466]: time="2025-02-13T15:34:23.891226512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:23.891337 containerd[1466]: time="2025-02-13T15:34:23.891285463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:23.891337 containerd[1466]: time="2025-02-13T15:34:23.891296634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:23.891734 containerd[1466]: time="2025-02-13T15:34:23.891630432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:23.918714 systemd[1]: Started cri-containerd-57749a6329c8822ba12e32cb81edd528499d9d8760247af97bfa906024d5fe2d.scope - libcontainer container 57749a6329c8822ba12e32cb81edd528499d9d8760247af97bfa906024d5fe2d.
Feb 13 15:34:23.921179 systemd[1]: Started cri-containerd-d5c305a9f8f57e0687e0791461029d4932fc7776b7788fa97aef85e6a3a01d8f.scope - libcontainer container d5c305a9f8f57e0687e0791461029d4932fc7776b7788fa97aef85e6a3a01d8f.
Feb 13 15:34:23.946936 containerd[1466]: time="2025-02-13T15:34:23.946887346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7762w,Uid:f1e465d7-fcbc-4009-979d-1ea21c9ee320,Namespace:kube-system,Attempt:0,} returns sandbox id \"57749a6329c8822ba12e32cb81edd528499d9d8760247af97bfa906024d5fe2d\""
Feb 13 15:34:23.947928 kubelet[2681]: E0213 15:34:23.947900 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:23.952205 containerd[1466]: time="2025-02-13T15:34:23.951927497Z" level=info msg="CreateContainer within sandbox \"57749a6329c8822ba12e32cb81edd528499d9d8760247af97bfa906024d5fe2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:34:23.970508 containerd[1466]: time="2025-02-13T15:34:23.970408762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-np7t9,Uid:7bb80a49-9f3d-48d5-b9b6-64192c10ca44,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d5c305a9f8f57e0687e0791461029d4932fc7776b7788fa97aef85e6a3a01d8f\""
Feb 13 15:34:23.973352 containerd[1466]: time="2025-02-13T15:34:23.973315690Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 15:34:23.978375 containerd[1466]: time="2025-02-13T15:34:23.978322910Z" level=info msg="CreateContainer within sandbox \"57749a6329c8822ba12e32cb81edd528499d9d8760247af97bfa906024d5fe2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"475e3fa70cc2faf6ac167ef9a8c3251e9442210c5d5d54d7bb6d080b6d2dcc04\""
Feb 13 15:34:23.978994 containerd[1466]: time="2025-02-13T15:34:23.978952154Z" level=info msg="StartContainer for \"475e3fa70cc2faf6ac167ef9a8c3251e9442210c5d5d54d7bb6d080b6d2dcc04\""
Feb 13 15:34:24.009664 systemd[1]: Started cri-containerd-475e3fa70cc2faf6ac167ef9a8c3251e9442210c5d5d54d7bb6d080b6d2dcc04.scope - libcontainer container 475e3fa70cc2faf6ac167ef9a8c3251e9442210c5d5d54d7bb6d080b6d2dcc04.
Feb 13 15:34:24.045819 containerd[1466]: time="2025-02-13T15:34:24.045776671Z" level=info msg="StartContainer for \"475e3fa70cc2faf6ac167ef9a8c3251e9442210c5d5d54d7bb6d080b6d2dcc04\" returns successfully"
Feb 13 15:34:24.067867 kubelet[2681]: E0213 15:34:24.067819 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:26.656273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371406176.mount: Deactivated successfully.
Feb 13 15:34:27.665347 containerd[1466]: time="2025-02-13T15:34:27.665288231Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:27.675443 containerd[1466]: time="2025-02-13T15:34:27.675361127Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Feb 13 15:34:27.676431 containerd[1466]: time="2025-02-13T15:34:27.676387265Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:27.678838 containerd[1466]: time="2025-02-13T15:34:27.678782598Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:27.679617 containerd[1466]: time="2025-02-13T15:34:27.679578374Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.706224933s"
Feb 13 15:34:27.679660 containerd[1466]: time="2025-02-13T15:34:27.679617227Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Feb 13 15:34:27.684213 containerd[1466]: time="2025-02-13T15:34:27.684163132Z" level=info msg="CreateContainer within sandbox \"d5c305a9f8f57e0687e0791461029d4932fc7776b7788fa97aef85e6a3a01d8f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 15:34:27.700255 containerd[1466]: time="2025-02-13T15:34:27.700192003Z" level=info msg="CreateContainer within sandbox \"d5c305a9f8f57e0687e0791461029d4932fc7776b7788fa97aef85e6a3a01d8f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6a4fd0ebd831f07a38a80228ff4e31a6bfd519784ce5cf9c14b445fb4cddcf70\""
Feb 13 15:34:27.700903 containerd[1466]: time="2025-02-13T15:34:27.700833098Z" level=info msg="StartContainer for \"6a4fd0ebd831f07a38a80228ff4e31a6bfd519784ce5cf9c14b445fb4cddcf70\""
Feb 13 15:34:27.735687 systemd[1]: Started cri-containerd-6a4fd0ebd831f07a38a80228ff4e31a6bfd519784ce5cf9c14b445fb4cddcf70.scope - libcontainer container 6a4fd0ebd831f07a38a80228ff4e31a6bfd519784ce5cf9c14b445fb4cddcf70.
Feb 13 15:34:27.862089 containerd[1466]: time="2025-02-13T15:34:27.862030587Z" level=info msg="StartContainer for \"6a4fd0ebd831f07a38a80228ff4e31a6bfd519784ce5cf9c14b445fb4cddcf70\" returns successfully"
Feb 13 15:34:28.119200 kubelet[2681]: I0213 15:34:28.119119 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7762w" podStartSLOduration=7.119098881 podStartE2EDuration="7.119098881s" podCreationTimestamp="2025-02-13 15:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:24.078783719 +0000 UTC m=+18.116716596" watchObservedRunningTime="2025-02-13 15:34:28.119098881 +0000 UTC m=+22.157031737"
Feb 13 15:34:31.001348 kubelet[2681]: I0213 15:34:31.001268 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-np7t9" podStartSLOduration=5.29042208 podStartE2EDuration="9.001245598s" podCreationTimestamp="2025-02-13 15:34:22 +0000 UTC" firstStartedPulling="2025-02-13 15:34:23.971854262 +0000 UTC m=+18.009787128" lastFinishedPulling="2025-02-13 15:34:27.68267778 +0000 UTC m=+21.720610646" observedRunningTime="2025-02-13 15:34:28.119282506 +0000 UTC m=+22.157215372" watchObservedRunningTime="2025-02-13 15:34:31.001245598 +0000 UTC m=+25.039178474"
Feb 13 15:34:31.010370 kubelet[2681]: I0213 15:34:31.010313 2681 topology_manager.go:215] "Topology Admit Handler" podUID="29ecdef8-370a-401d-997d-912b36838f29" podNamespace="calico-system" podName="calico-typha-547b6f6d8d-29mrg"
Feb 13 15:34:31.022069 systemd[1]: Created slice kubepods-besteffort-pod29ecdef8_370a_401d_997d_912b36838f29.slice - libcontainer container kubepods-besteffort-pod29ecdef8_370a_401d_997d_912b36838f29.slice.
Feb 13 15:34:31.044258 kubelet[2681]: I0213 15:34:31.044202 2681 topology_manager.go:215] "Topology Admit Handler" podUID="84b0920c-d3a2-4f13-9358-73cb1e7b9d22" podNamespace="calico-system" podName="calico-node-5jh6l"
Feb 13 15:34:31.054409 systemd[1]: Created slice kubepods-besteffort-pod84b0920c_d3a2_4f13_9358_73cb1e7b9d22.slice - libcontainer container kubepods-besteffort-pod84b0920c_d3a2_4f13_9358_73cb1e7b9d22.slice.
Feb 13 15:34:31.077808 kubelet[2681]: I0213 15:34:31.077751 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-tigera-ca-bundle\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.077808 kubelet[2681]: I0213 15:34:31.077812 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-var-run-calico\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.077965 kubelet[2681]: I0213 15:34:31.077837 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9pq\" (UniqueName: \"kubernetes.io/projected/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-kube-api-access-wp9pq\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.077965 kubelet[2681]: I0213 15:34:31.077880 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/29ecdef8-370a-401d-997d-912b36838f29-typha-certs\") pod \"calico-typha-547b6f6d8d-29mrg\" (UID: \"29ecdef8-370a-401d-997d-912b36838f29\") " pod="calico-system/calico-typha-547b6f6d8d-29mrg"
Feb 13 15:34:31.077965 kubelet[2681]: I0213 15:34:31.077903 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-xtables-lock\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.077965 kubelet[2681]: I0213 15:34:31.077924 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2bp2\" (UniqueName: \"kubernetes.io/projected/29ecdef8-370a-401d-997d-912b36838f29-kube-api-access-v2bp2\") pod \"calico-typha-547b6f6d8d-29mrg\" (UID: \"29ecdef8-370a-401d-997d-912b36838f29\") " pod="calico-system/calico-typha-547b6f6d8d-29mrg"
Feb 13 15:34:31.077965 kubelet[2681]: I0213 15:34:31.077943 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-node-certs\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.078082 kubelet[2681]: I0213 15:34:31.077963 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-lib-modules\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.078082 kubelet[2681]: I0213 15:34:31.077983 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-cni-log-dir\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.078082 kubelet[2681]: I0213 15:34:31.078005 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-cni-net-dir\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.078082 kubelet[2681]: I0213 15:34:31.078026 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29ecdef8-370a-401d-997d-912b36838f29-tigera-ca-bundle\") pod \"calico-typha-547b6f6d8d-29mrg\" (UID: \"29ecdef8-370a-401d-997d-912b36838f29\") " pod="calico-system/calico-typha-547b6f6d8d-29mrg"
Feb 13 15:34:31.078082 kubelet[2681]: I0213 15:34:31.078045 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-cni-bin-dir\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.078249 kubelet[2681]: I0213 15:34:31.078076 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-policysync\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.078249 kubelet[2681]: I0213 15:34:31.078100 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-var-lib-calico\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.078249 kubelet[2681]: I0213 15:34:31.078118 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/84b0920c-d3a2-4f13-9358-73cb1e7b9d22-flexvol-driver-host\") pod \"calico-node-5jh6l\" (UID: \"84b0920c-d3a2-4f13-9358-73cb1e7b9d22\") " pod="calico-system/calico-node-5jh6l"
Feb 13 15:34:31.154270 kubelet[2681]: I0213 15:34:31.154209 2681 topology_manager.go:215] "Topology Admit Handler" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c" podNamespace="calico-system" podName="csi-node-driver-kxsjw"
Feb 13 15:34:31.155506 kubelet[2681]: E0213 15:34:31.154532 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:31.180510 kubelet[2681]: I0213 15:34:31.178528 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c-varrun\") pod \"csi-node-driver-kxsjw\" (UID: \"0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c\") " pod="calico-system/csi-node-driver-kxsjw"
Feb 13 15:34:31.180510 kubelet[2681]: I0213 15:34:31.178582 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c-registration-dir\") pod \"csi-node-driver-kxsjw\" (UID: \"0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c\") " pod="calico-system/csi-node-driver-kxsjw"
Feb 13 15:34:31.180510 kubelet[2681]: I0213 15:34:31.178617 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c-socket-dir\") pod \"csi-node-driver-kxsjw\" (UID: \"0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c\") " pod="calico-system/csi-node-driver-kxsjw"
Feb 13 15:34:31.180510 kubelet[2681]: I0213 15:34:31.178638 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4nsz\" (UniqueName: \"kubernetes.io/projected/0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c-kube-api-access-c4nsz\") pod \"csi-node-driver-kxsjw\" (UID: \"0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c\") " pod="calico-system/csi-node-driver-kxsjw"
Feb 13 15:34:31.180510 kubelet[2681]: I0213 15:34:31.178677 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c-kubelet-dir\") pod \"csi-node-driver-kxsjw\" (UID: \"0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c\") " pod="calico-system/csi-node-driver-kxsjw"
Feb 13 15:34:31.183063 kubelet[2681]: E0213 15:34:31.183025 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.183063 kubelet[2681]: W0213 15:34:31.183056 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.183180 kubelet[2681]: E0213 15:34:31.183081 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.183860 kubelet[2681]: E0213 15:34:31.183834 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.183860 kubelet[2681]: W0213 15:34:31.183854 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.183927 kubelet[2681]: E0213 15:34:31.183868 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.185529 kubelet[2681]: E0213 15:34:31.184395 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.185529 kubelet[2681]: W0213 15:34:31.184414 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.185529 kubelet[2681]: E0213 15:34:31.184427 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.185529 kubelet[2681]: E0213 15:34:31.185160 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.185529 kubelet[2681]: W0213 15:34:31.185172 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.185529 kubelet[2681]: E0213 15:34:31.185287 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.188465 kubelet[2681]: E0213 15:34:31.188373 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.188465 kubelet[2681]: W0213 15:34:31.188397 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.193565 kubelet[2681]: E0213 15:34:31.193504 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.194577 kubelet[2681]: E0213 15:34:31.194527 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.194577 kubelet[2681]: W0213 15:34:31.194556 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.194708 kubelet[2681]: E0213 15:34:31.194580 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.202557 kubelet[2681]: E0213 15:34:31.202407 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.202557 kubelet[2681]: W0213 15:34:31.202430 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.202557 kubelet[2681]: E0213 15:34:31.202450 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.204253 kubelet[2681]: E0213 15:34:31.204173 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.204253 kubelet[2681]: W0213 15:34:31.204193 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.204253 kubelet[2681]: E0213 15:34:31.204214 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.280419 kubelet[2681]: E0213 15:34:31.280259 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.280419 kubelet[2681]: W0213 15:34:31.280303 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.280419 kubelet[2681]: E0213 15:34:31.280331 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 15:34:31.280933 kubelet[2681]: E0213 15:34:31.280905 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:31.280933 kubelet[2681]: W0213 15:34:31.280925 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:31.281029 kubelet[2681]: E0213 15:34:31.280949 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:31.281708 kubelet[2681]: E0213 15:34:31.281494 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:31.281708 kubelet[2681]: W0213 15:34:31.281523 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:31.281708 kubelet[2681]: E0213 15:34:31.281560 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:31.281997 kubelet[2681]: E0213 15:34:31.281979 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:31.281997 kubelet[2681]: W0213 15:34:31.281993 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:31.282079 kubelet[2681]: E0213 15:34:31.282039 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:31.282845 kubelet[2681]: E0213 15:34:31.282800 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:31.282845 kubelet[2681]: W0213 15:34:31.282824 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:31.283177 kubelet[2681]: E0213 15:34:31.282880 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 15:34:31.283177 kubelet[2681]: E0213 15:34:31.283105 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.283177 kubelet[2681]: W0213 15:34:31.283113 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.283177 kubelet[2681]: E0213 15:34:31.283157 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.283967 kubelet[2681]: E0213 15:34:31.283415 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.283967 kubelet[2681]: W0213 15:34:31.283459 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.283967 kubelet[2681]: E0213 15:34:31.283532 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.283967 kubelet[2681]: E0213 15:34:31.283664 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.283967 kubelet[2681]: W0213 15:34:31.283672 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.283967 kubelet[2681]: E0213 15:34:31.283719 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.284123 kubelet[2681]: E0213 15:34:31.284000 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.284123 kubelet[2681]: W0213 15:34:31.284009 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.284168 kubelet[2681]: E0213 15:34:31.284140 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.284383 kubelet[2681]: E0213 15:34:31.284242 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.284383 kubelet[2681]: W0213 15:34:31.284258 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.284383 kubelet[2681]: E0213 15:34:31.284318 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.284548 kubelet[2681]: E0213 15:34:31.284531 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.284572 kubelet[2681]: W0213 15:34:31.284545 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.284643 kubelet[2681]: E0213 15:34:31.284625 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.284812 kubelet[2681]: E0213 15:34:31.284771 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.284812 kubelet[2681]: W0213 15:34:31.284782 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.284812 kubelet[2681]: E0213 15:34:31.284809 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.285039 kubelet[2681]: E0213 15:34:31.285025 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.285039 kubelet[2681]: W0213 15:34:31.285036 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.285094 kubelet[2681]: E0213 15:34:31.285050 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.285299 kubelet[2681]: E0213 15:34:31.285275 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.285299 kubelet[2681]: W0213 15:34:31.285286 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.285366 kubelet[2681]: E0213 15:34:31.285329 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.285588 kubelet[2681]: E0213 15:34:31.285531 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.285588 kubelet[2681]: W0213 15:34:31.285550 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.285676 kubelet[2681]: E0213 15:34:31.285602 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.285799 kubelet[2681]: E0213 15:34:31.285774 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.285799 kubelet[2681]: W0213 15:34:31.285787 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.286008 kubelet[2681]: E0213 15:34:31.285977 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.286171 kubelet[2681]: E0213 15:34:31.286152 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.286171 kubelet[2681]: W0213 15:34:31.286163 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.286248 kubelet[2681]: E0213 15:34:31.286187 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:34:31.286417 kubelet[2681]: E0213 15:34:31.286400 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.286417 kubelet[2681]: W0213 15:34:31.286411 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.286508 kubelet[2681]: E0213 15:34:31.286439 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.286738 kubelet[2681]: E0213 15:34:31.286688 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.286738 kubelet[2681]: W0213 15:34:31.286714 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.286837 kubelet[2681]: E0213 15:34:31.286755 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.287019 kubelet[2681]: E0213 15:34:31.287002 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.287019 kubelet[2681]: W0213 15:34:31.287019 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.287132 kubelet[2681]: E0213 15:34:31.287037 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.287319 kubelet[2681]: E0213 15:34:31.287300 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.287634 kubelet[2681]: W0213 15:34:31.287399 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.287634 kubelet[2681]: E0213 15:34:31.287427 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.288000 kubelet[2681]: E0213 15:34:31.287965 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.288000 kubelet[2681]: W0213 15:34:31.287983 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.288327 kubelet[2681]: E0213 15:34:31.288244 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.288327 kubelet[2681]: E0213 15:34:31.288262 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.288327 kubelet[2681]: W0213 15:34:31.288323 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.288430 kubelet[2681]: E0213 15:34:31.288372 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.288658 kubelet[2681]: E0213 15:34:31.288632 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.288658 kubelet[2681]: W0213 15:34:31.288655 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.288721 kubelet[2681]: E0213 15:34:31.288683 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.288904 kubelet[2681]: E0213 15:34:31.288881 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.288904 kubelet[2681]: W0213 15:34:31.288904 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.288904 kubelet[2681]: E0213 15:34:31.288913 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.295541 kubelet[2681]: E0213 15:34:31.295510 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:31.295541 kubelet[2681]: W0213 15:34:31.295534 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:31.295654 kubelet[2681]: E0213 15:34:31.295552 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:31.329044 kubelet[2681]: E0213 15:34:31.329007 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:31.329828 containerd[1466]: time="2025-02-13T15:34:31.329596190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547b6f6d8d-29mrg,Uid:29ecdef8-370a-401d-997d-912b36838f29,Namespace:calico-system,Attempt:0,}"
Feb 13 15:34:31.356172 containerd[1466]: time="2025-02-13T15:34:31.355921238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:31.356172 containerd[1466]: time="2025-02-13T15:34:31.356005256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:31.356172 containerd[1466]: time="2025-02-13T15:34:31.356024984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:31.356395 containerd[1466]: time="2025-02-13T15:34:31.356193450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:31.358682 kubelet[2681]: E0213 15:34:31.358557 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:31.359444 containerd[1466]: time="2025-02-13T15:34:31.359128584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jh6l,Uid:84b0920c-d3a2-4f13-9358-73cb1e7b9d22,Namespace:calico-system,Attempt:0,}"
Feb 13 15:34:31.381644 systemd[1]: Started cri-containerd-e0c81bd3d26375202c37fdb9c860b0425ec5c974ae2f8732be9c2e77b4780edd.scope - libcontainer container e0c81bd3d26375202c37fdb9c860b0425ec5c974ae2f8732be9c2e77b4780edd.
Feb 13 15:34:31.389296 containerd[1466]: time="2025-02-13T15:34:31.389115002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:34:31.389878 containerd[1466]: time="2025-02-13T15:34:31.389563335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:34:31.389952 containerd[1466]: time="2025-02-13T15:34:31.389855434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:31.390142 containerd[1466]: time="2025-02-13T15:34:31.390092520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:34:31.414682 systemd[1]: Started cri-containerd-36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed.scope - libcontainer container 36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed.
Feb 13 15:34:31.439407 containerd[1466]: time="2025-02-13T15:34:31.439236109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547b6f6d8d-29mrg,Uid:29ecdef8-370a-401d-997d-912b36838f29,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0c81bd3d26375202c37fdb9c860b0425ec5c974ae2f8732be9c2e77b4780edd\""
Feb 13 15:34:31.441395 kubelet[2681]: E0213 15:34:31.441360 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:31.444986 containerd[1466]: time="2025-02-13T15:34:31.444800083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 15:34:31.449577 containerd[1466]: time="2025-02-13T15:34:31.449532795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jh6l,Uid:84b0920c-d3a2-4f13-9358-73cb1e7b9d22,Namespace:calico-system,Attempt:0,} returns sandbox id \"36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed\""
Feb 13 15:34:31.450387 kubelet[2681]: E0213 15:34:31.450349 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:33.030289 kubelet[2681]: E0213 15:34:33.030222 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:34.567238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375468538.mount: Deactivated successfully.
Feb 13 15:34:35.030711 kubelet[2681]: E0213 15:34:35.030667 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:36.811979 containerd[1466]: time="2025-02-13T15:34:36.811897034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:37.131052 kubelet[2681]: E0213 15:34:37.130895 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:37.494511 containerd[1466]: time="2025-02-13T15:34:37.494437258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Feb 13 15:34:37.673450 containerd[1466]: time="2025-02-13T15:34:37.673389333Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:37.678006 containerd[1466]: time="2025-02-13T15:34:37.677969905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:37.678737 containerd[1466]: time="2025-02-13T15:34:37.678697712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 6.233867431s"
Feb 13 15:34:37.678737 containerd[1466]: time="2025-02-13T15:34:37.678728720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 15:34:37.680818 containerd[1466]: time="2025-02-13T15:34:37.680794107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 15:34:37.687470 containerd[1466]: time="2025-02-13T15:34:37.687433955Z" level=info msg="CreateContainer within sandbox \"e0c81bd3d26375202c37fdb9c860b0425ec5c974ae2f8732be9c2e77b4780edd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 15:34:37.715398 containerd[1466]: time="2025-02-13T15:34:37.715347658Z" level=info msg="CreateContainer within sandbox \"e0c81bd3d26375202c37fdb9c860b0425ec5c974ae2f8732be9c2e77b4780edd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b83b90507b9032ffaae8bbe2443a362fd6f7e27bfde3077b74624c27cffdaa67\""
Feb 13 15:34:37.716053 containerd[1466]: time="2025-02-13T15:34:37.715877944Z" level=info msg="StartContainer for \"b83b90507b9032ffaae8bbe2443a362fd6f7e27bfde3077b74624c27cffdaa67\""
Feb 13 15:34:37.751675 systemd[1]: Started cri-containerd-b83b90507b9032ffaae8bbe2443a362fd6f7e27bfde3077b74624c27cffdaa67.scope - libcontainer container b83b90507b9032ffaae8bbe2443a362fd6f7e27bfde3077b74624c27cffdaa67.
Feb 13 15:34:37.796678 containerd[1466]: time="2025-02-13T15:34:37.796637195Z" level=info msg="StartContainer for \"b83b90507b9032ffaae8bbe2443a362fd6f7e27bfde3077b74624c27cffdaa67\" returns successfully"
Feb 13 15:34:38.138662 kubelet[2681]: E0213 15:34:38.138358 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:38.153775 kubelet[2681]: I0213 15:34:38.153670 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-547b6f6d8d-29mrg" podStartSLOduration=1.915137374 podStartE2EDuration="8.153653046s" podCreationTimestamp="2025-02-13 15:34:30 +0000 UTC" firstStartedPulling="2025-02-13 15:34:31.442143231 +0000 UTC m=+25.480076097" lastFinishedPulling="2025-02-13 15:34:37.680658903 +0000 UTC m=+31.718591769" observedRunningTime="2025-02-13 15:34:38.152594839 +0000 UTC m=+32.190527725" watchObservedRunningTime="2025-02-13 15:34:38.153653046 +0000 UTC m=+32.191585912"
Feb 13 15:34:38.216774 kubelet[2681]: E0213 15:34:38.216732 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.216774 kubelet[2681]: W0213 15:34:38.216759 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.216774 kubelet[2681]: E0213 15:34:38.216782 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:34:38.216983 kubelet[2681]: E0213 15:34:38.216977 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.217019 kubelet[2681]: W0213 15:34:38.216985 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.217019 kubelet[2681]: E0213 15:34:38.216995 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.217206 kubelet[2681]: E0213 15:34:38.217184 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.217206 kubelet[2681]: W0213 15:34:38.217195 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.217206 kubelet[2681]: E0213 15:34:38.217203 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.217408 kubelet[2681]: E0213 15:34:38.217379 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.217408 kubelet[2681]: W0213 15:34:38.217390 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.217408 kubelet[2681]: E0213 15:34:38.217398 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.217654 kubelet[2681]: E0213 15:34:38.217590 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.217654 kubelet[2681]: W0213 15:34:38.217598 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.217654 kubelet[2681]: E0213 15:34:38.217607 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.217813 kubelet[2681]: E0213 15:34:38.217800 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.217813 kubelet[2681]: W0213 15:34:38.217809 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.217890 kubelet[2681]: E0213 15:34:38.217818 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.217999 kubelet[2681]: E0213 15:34:38.217984 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.217999 kubelet[2681]: W0213 15:34:38.217992 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.218055 kubelet[2681]: E0213 15:34:38.218000 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.218193 kubelet[2681]: E0213 15:34:38.218179 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.218193 kubelet[2681]: W0213 15:34:38.218189 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.218360 kubelet[2681]: E0213 15:34:38.218197 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.218395 kubelet[2681]: E0213 15:34:38.218366 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.218395 kubelet[2681]: W0213 15:34:38.218372 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.218395 kubelet[2681]: E0213 15:34:38.218379 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 15:34:38.218593 kubelet[2681]: E0213 15:34:38.218579 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.218593 kubelet[2681]: W0213 15:34:38.218587 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.218662 kubelet[2681]: E0213 15:34:38.218595 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.218777 kubelet[2681]: E0213 15:34:38.218762 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.218777 kubelet[2681]: W0213 15:34:38.218770 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.218777 kubelet[2681]: E0213 15:34:38.218778 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.218952 kubelet[2681]: E0213 15:34:38.218940 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.218952 kubelet[2681]: W0213 15:34:38.218948 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.219013 kubelet[2681]: E0213 15:34:38.218955 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.219151 kubelet[2681]: E0213 15:34:38.219134 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.219151 kubelet[2681]: W0213 15:34:38.219142 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.219151 kubelet[2681]: E0213 15:34:38.219149 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.219321 kubelet[2681]: E0213 15:34:38.219308 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.219321 kubelet[2681]: W0213 15:34:38.219318 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.219377 kubelet[2681]: E0213 15:34:38.219325 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.219507 kubelet[2681]: E0213 15:34:38.219492 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.219507 kubelet[2681]: W0213 15:34:38.219501 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.219589 kubelet[2681]: E0213 15:34:38.219508 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.232870 kubelet[2681]: E0213 15:34:38.232836 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.232870 kubelet[2681]: W0213 15:34:38.232857 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.232973 kubelet[2681]: E0213 15:34:38.232877 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.233196 kubelet[2681]: E0213 15:34:38.233171 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.233196 kubelet[2681]: W0213 15:34:38.233183 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.233247 kubelet[2681]: E0213 15:34:38.233202 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.233412 kubelet[2681]: E0213 15:34:38.233386 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.233412 kubelet[2681]: W0213 15:34:38.233398 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.233412 kubelet[2681]: E0213 15:34:38.233410 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.233658 kubelet[2681]: E0213 15:34:38.233637 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.233658 kubelet[2681]: W0213 15:34:38.233653 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.233711 kubelet[2681]: E0213 15:34:38.233672 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.233880 kubelet[2681]: E0213 15:34:38.233866 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.233880 kubelet[2681]: W0213 15:34:38.233875 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.233933 kubelet[2681]: E0213 15:34:38.233888 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.234072 kubelet[2681]: E0213 15:34:38.234058 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.234072 kubelet[2681]: W0213 15:34:38.234066 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.234129 kubelet[2681]: E0213 15:34:38.234080 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.234313 kubelet[2681]: E0213 15:34:38.234298 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.234313 kubelet[2681]: W0213 15:34:38.234308 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.234361 kubelet[2681]: E0213 15:34:38.234320 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.237497 kubelet[2681]: E0213 15:34:38.234908 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.237497 kubelet[2681]: W0213 15:34:38.234927 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.237497 kubelet[2681]: E0213 15:34:38.234939 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.237497 kubelet[2681]: E0213 15:34:38.235149 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.237497 kubelet[2681]: W0213 15:34:38.235157 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.237497 kubelet[2681]: E0213 15:34:38.235166 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.237667 kubelet[2681]: E0213 15:34:38.237604 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.237667 kubelet[2681]: W0213 15:34:38.237619 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.237667 kubelet[2681]: E0213 15:34:38.237654 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.238101 kubelet[2681]: E0213 15:34:38.238077 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.238101 kubelet[2681]: W0213 15:34:38.238090 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.238239 kubelet[2681]: E0213 15:34:38.238139 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.238343 kubelet[2681]: E0213 15:34:38.238327 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.238343 kubelet[2681]: W0213 15:34:38.238341 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.238399 kubelet[2681]: E0213 15:34:38.238360 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.238613 kubelet[2681]: E0213 15:34:38.238599 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.238613 kubelet[2681]: W0213 15:34:38.238611 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.238682 kubelet[2681]: E0213 15:34:38.238627 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.238928 kubelet[2681]: E0213 15:34:38.238913 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.238928 kubelet[2681]: W0213 15:34:38.238926 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.238986 kubelet[2681]: E0213 15:34:38.238941 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:38.239164 kubelet[2681]: E0213 15:34:38.239141 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.239164 kubelet[2681]: W0213 15:34:38.239154 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.239229 kubelet[2681]: E0213 15:34:38.239165 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:38.239433 kubelet[2681]: E0213 15:34:38.239405 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:38.239433 kubelet[2681]: W0213 15:34:38.239419 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:38.239433 kubelet[2681]: E0213 15:34:38.239433 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 15:34:38.239755 kubelet[2681]: E0213 15:34:38.239737 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.239755 kubelet[2681]: W0213 15:34:38.239751 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.239840 kubelet[2681]: E0213 15:34:38.239766 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.240011 kubelet[2681]: E0213 15:34:38.239992 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:38.240011 kubelet[2681]: W0213 15:34:38.240005 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:38.240091 kubelet[2681]: E0213 15:34:38.240017 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:38.693890 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:51582.service - OpenSSH per-connection server daemon (10.0.0.1:51582).
Feb 13 15:34:38.733306 sshd[3294]: Accepted publickey for core from 10.0.0.1 port 51582 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:38.734928 sshd-session[3294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:38.738935 systemd-logind[1449]: New session 10 of user core.
Feb 13 15:34:38.749657 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:34:38.866534 sshd[3296]: Connection closed by 10.0.0.1 port 51582
Feb 13 15:34:38.867037 sshd-session[3294]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:38.871078 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:51582.service: Deactivated successfully.
Feb 13 15:34:38.873035 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:34:38.873710 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:34:38.874822 systemd-logind[1449]: Removed session 10.
Feb 13 15:34:39.030541 kubelet[2681]: E0213 15:34:39.030353 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:39.139450 kubelet[2681]: I0213 15:34:39.139413 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:34:39.140193 kubelet[2681]: E0213 15:34:39.140165 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:39.226328 kubelet[2681]: E0213 15:34:39.226284 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.226328 kubelet[2681]: W0213 15:34:39.226309 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.226328 kubelet[2681]: E0213 15:34:39.226331 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 15:34:39.226634 kubelet[2681]: E0213 15:34:39.226609 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.226634 kubelet[2681]: W0213 15:34:39.226623 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.226634 kubelet[2681]: E0213 15:34:39.226633 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:39.226917 kubelet[2681]: E0213 15:34:39.226903 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.226917 kubelet[2681]: W0213 15:34:39.226914 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.226980 kubelet[2681]: E0213 15:34:39.226923 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:39.227151 kubelet[2681]: E0213 15:34:39.227136 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.227151 kubelet[2681]: W0213 15:34:39.227148 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.227237 kubelet[2681]: E0213 15:34:39.227157 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:39.227363 kubelet[2681]: E0213 15:34:39.227348 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.227363 kubelet[2681]: W0213 15:34:39.227359 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.227420 kubelet[2681]: E0213 15:34:39.227368 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:39.227599 kubelet[2681]: E0213 15:34:39.227583 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.227599 kubelet[2681]: W0213 15:34:39.227596 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.227645 kubelet[2681]: E0213 15:34:39.227606 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:39.227807 kubelet[2681]: E0213 15:34:39.227793 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.227807 kubelet[2681]: W0213 15:34:39.227804 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.227857 kubelet[2681]: E0213 15:34:39.227816 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:39.228008 kubelet[2681]: E0213 15:34:39.227994 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.228008 kubelet[2681]: W0213 15:34:39.228005 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.228056 kubelet[2681]: E0213 15:34:39.228015 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:39.228218 kubelet[2681]: E0213 15:34:39.228204 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.228218 kubelet[2681]: W0213 15:34:39.228215 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.228268 kubelet[2681]: E0213 15:34:39.228224 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:39.228404 kubelet[2681]: E0213 15:34:39.228390 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.228404 kubelet[2681]: W0213 15:34:39.228401 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.228449 kubelet[2681]: E0213 15:34:39.228409 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:39.228637 kubelet[2681]: E0213 15:34:39.228622 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.228637 kubelet[2681]: W0213 15:34:39.228633 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.228700 kubelet[2681]: E0213 15:34:39.228643 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:34:39.228873 kubelet[2681]: E0213 15:34:39.228834 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.228873 kubelet[2681]: W0213 15:34:39.228848 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.228873 kubelet[2681]: E0213 15:34:39.228856 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:34:39.229072 kubelet[2681]: E0213 15:34:39.229049 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:34:39.229072 kubelet[2681]: W0213 15:34:39.229056 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:34:39.229072 kubelet[2681]: E0213 15:34:39.229064 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 15:34:39.229257 kubelet[2681]: E0213 15:34:39.229245 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.229257 kubelet[2681]: W0213 15:34:39.229254 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.229307 kubelet[2681]: E0213 15:34:39.229262 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.229438 kubelet[2681]: E0213 15:34:39.229427 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.229438 kubelet[2681]: W0213 15:34:39.229436 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.229510 kubelet[2681]: E0213 15:34:39.229444 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.240035 kubelet[2681]: E0213 15:34:39.239999 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.240035 kubelet[2681]: W0213 15:34:39.240026 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.240126 kubelet[2681]: E0213 15:34:39.240046 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.240363 kubelet[2681]: E0213 15:34:39.240331 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.240363 kubelet[2681]: W0213 15:34:39.240347 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.240363 kubelet[2681]: E0213 15:34:39.240366 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.240684 kubelet[2681]: E0213 15:34:39.240668 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.240684 kubelet[2681]: W0213 15:34:39.240679 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.240684 kubelet[2681]: E0213 15:34:39.240694 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.240942 kubelet[2681]: E0213 15:34:39.240912 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.240942 kubelet[2681]: W0213 15:34:39.240923 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.240942 kubelet[2681]: E0213 15:34:39.240938 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.241192 kubelet[2681]: E0213 15:34:39.241174 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.241192 kubelet[2681]: W0213 15:34:39.241183 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.241261 kubelet[2681]: E0213 15:34:39.241197 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.241406 kubelet[2681]: E0213 15:34:39.241390 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.241406 kubelet[2681]: W0213 15:34:39.241400 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.241492 kubelet[2681]: E0213 15:34:39.241441 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.241620 kubelet[2681]: E0213 15:34:39.241605 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.241620 kubelet[2681]: W0213 15:34:39.241614 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.241695 kubelet[2681]: E0213 15:34:39.241640 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.241821 kubelet[2681]: E0213 15:34:39.241806 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.241821 kubelet[2681]: W0213 15:34:39.241815 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.241881 kubelet[2681]: E0213 15:34:39.241840 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.242017 kubelet[2681]: E0213 15:34:39.241995 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.242017 kubelet[2681]: W0213 15:34:39.242004 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.242075 kubelet[2681]: E0213 15:34:39.242017 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.242400 kubelet[2681]: E0213 15:34:39.242355 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.242400 kubelet[2681]: W0213 15:34:39.242377 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.242400 kubelet[2681]: E0213 15:34:39.242400 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.242653 kubelet[2681]: E0213 15:34:39.242607 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.242653 kubelet[2681]: W0213 15:34:39.242621 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.242653 kubelet[2681]: E0213 15:34:39.242637 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.242879 kubelet[2681]: E0213 15:34:39.242860 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.242879 kubelet[2681]: W0213 15:34:39.242873 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.242953 kubelet[2681]: E0213 15:34:39.242889 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.243294 kubelet[2681]: E0213 15:34:39.243273 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.243294 kubelet[2681]: W0213 15:34:39.243289 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.243377 kubelet[2681]: E0213 15:34:39.243307 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.243575 kubelet[2681]: E0213 15:34:39.243558 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.243575 kubelet[2681]: W0213 15:34:39.243573 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.243768 kubelet[2681]: E0213 15:34:39.243682 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.243921 kubelet[2681]: E0213 15:34:39.243903 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.243921 kubelet[2681]: W0213 15:34:39.243916 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.243972 kubelet[2681]: E0213 15:34:39.243952 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.244844 kubelet[2681]: E0213 15:34:39.244820 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.244844 kubelet[2681]: W0213 15:34:39.244835 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.244903 kubelet[2681]: E0213 15:34:39.244854 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.245901 kubelet[2681]: E0213 15:34:39.245856 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.245901 kubelet[2681]: W0213 15:34:39.245890 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.245973 kubelet[2681]: E0213 15:34:39.245922 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:39.246740 kubelet[2681]: E0213 15:34:39.246712 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:39.246740 kubelet[2681]: W0213 15:34:39.246731 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:39.246807 kubelet[2681]: E0213 15:34:39.246746 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.030634 kubelet[2681]: E0213 15:34:41.030258 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:41.610215 kubelet[2681]: I0213 15:34:41.610179 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:34:41.610851 kubelet[2681]: E0213 15:34:41.610836 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:41.632000 containerd[1466]: time="2025-02-13T15:34:41.631936810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:41.644314 kubelet[2681]: E0213 15:34:41.644279 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.644314 kubelet[2681]: W0213 15:34:41.644303 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.644417 kubelet[2681]: E0213 15:34:41.644324 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.644609 kubelet[2681]: E0213 15:34:41.644592 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.644609 kubelet[2681]: W0213 15:34:41.644602 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.644680 kubelet[2681]: E0213 15:34:41.644612 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.644796 kubelet[2681]: E0213 15:34:41.644785 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.644822 kubelet[2681]: W0213 15:34:41.644796 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.644822 kubelet[2681]: E0213 15:34:41.644804 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.644981 kubelet[2681]: E0213 15:34:41.644971 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.644981 kubelet[2681]: W0213 15:34:41.644979 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.645033 kubelet[2681]: E0213 15:34:41.644986 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.645178 kubelet[2681]: E0213 15:34:41.645168 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.645178 kubelet[2681]: W0213 15:34:41.645177 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.645230 kubelet[2681]: E0213 15:34:41.645184 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.645359 kubelet[2681]: E0213 15:34:41.645348 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.645359 kubelet[2681]: W0213 15:34:41.645357 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.645413 kubelet[2681]: E0213 15:34:41.645364 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.645567 kubelet[2681]: E0213 15:34:41.645555 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.645567 kubelet[2681]: W0213 15:34:41.645565 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.645620 kubelet[2681]: E0213 15:34:41.645574 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.645767 kubelet[2681]: E0213 15:34:41.645757 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.645767 kubelet[2681]: W0213 15:34:41.645765 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.645876 kubelet[2681]: E0213 15:34:41.645773 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.645966 kubelet[2681]: E0213 15:34:41.645956 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.645966 kubelet[2681]: W0213 15:34:41.645964 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.646014 kubelet[2681]: E0213 15:34:41.645972 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.646167 kubelet[2681]: E0213 15:34:41.646156 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.646167 kubelet[2681]: W0213 15:34:41.646165 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.646225 kubelet[2681]: E0213 15:34:41.646173 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.646346 kubelet[2681]: E0213 15:34:41.646335 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.646346 kubelet[2681]: W0213 15:34:41.646345 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.646386 kubelet[2681]: E0213 15:34:41.646352 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.646568 kubelet[2681]: E0213 15:34:41.646548 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.646568 kubelet[2681]: W0213 15:34:41.646566 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.646616 kubelet[2681]: E0213 15:34:41.646574 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.646792 kubelet[2681]: E0213 15:34:41.646777 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.646792 kubelet[2681]: W0213 15:34:41.646788 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.646846 kubelet[2681]: E0213 15:34:41.646797 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.646990 kubelet[2681]: E0213 15:34:41.646977 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.646990 kubelet[2681]: W0213 15:34:41.646985 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.647066 kubelet[2681]: E0213 15:34:41.646992 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.647189 kubelet[2681]: E0213 15:34:41.647166 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.647189 kubelet[2681]: W0213 15:34:41.647177 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.647189 kubelet[2681]: E0213 15:34:41.647185 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.657544 kubelet[2681]: E0213 15:34:41.657524 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.657544 kubelet[2681]: W0213 15:34:41.657540 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.657635 kubelet[2681]: E0213 15:34:41.657554 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.657815 kubelet[2681]: E0213 15:34:41.657778 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.657815 kubelet[2681]: W0213 15:34:41.657798 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.657940 kubelet[2681]: E0213 15:34:41.657826 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.658028 kubelet[2681]: E0213 15:34:41.658012 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.658028 kubelet[2681]: W0213 15:34:41.658027 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.658133 kubelet[2681]: E0213 15:34:41.658052 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.658262 kubelet[2681]: E0213 15:34:41.658250 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.658262 kubelet[2681]: W0213 15:34:41.658259 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.658324 kubelet[2681]: E0213 15:34:41.658270 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.658469 kubelet[2681]: E0213 15:34:41.658449 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.658469 kubelet[2681]: W0213 15:34:41.658461 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.658558 kubelet[2681]: E0213 15:34:41.658489 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.658694 kubelet[2681]: E0213 15:34:41.658680 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.658694 kubelet[2681]: W0213 15:34:41.658690 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.658756 kubelet[2681]: E0213 15:34:41.658704 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.659048 kubelet[2681]: E0213 15:34:41.659021 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.659048 kubelet[2681]: W0213 15:34:41.659043 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.659125 kubelet[2681]: E0213 15:34:41.659058 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.659249 kubelet[2681]: E0213 15:34:41.659236 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.659249 kubelet[2681]: W0213 15:34:41.659247 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.659309 kubelet[2681]: E0213 15:34:41.659262 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.659447 kubelet[2681]: E0213 15:34:41.659433 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.659491 kubelet[2681]: W0213 15:34:41.659452 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.659491 kubelet[2681]: E0213 15:34:41.659466 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.659678 kubelet[2681]: E0213 15:34:41.659665 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.659678 kubelet[2681]: W0213 15:34:41.659674 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.659736 kubelet[2681]: E0213 15:34:41.659685 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.659883 kubelet[2681]: E0213 15:34:41.659871 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.659883 kubelet[2681]: W0213 15:34:41.659880 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.659933 kubelet[2681]: E0213 15:34:41.659891 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.660199 kubelet[2681]: E0213 15:34:41.660181 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.660199 kubelet[2681]: W0213 15:34:41.660193 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.660255 kubelet[2681]: E0213 15:34:41.660211 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.660426 kubelet[2681]: E0213 15:34:41.660412 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.660426 kubelet[2681]: W0213 15:34:41.660423 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.660549 kubelet[2681]: E0213 15:34:41.660436 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.660646 kubelet[2681]: E0213 15:34:41.660634 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.660646 kubelet[2681]: W0213 15:34:41.660645 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.660689 kubelet[2681]: E0213 15:34:41.660658 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.660852 kubelet[2681]: E0213 15:34:41.660841 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.660852 kubelet[2681]: W0213 15:34:41.660850 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.660908 kubelet[2681]: E0213 15:34:41.660857 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.661022 kubelet[2681]: E0213 15:34:41.661011 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.661022 kubelet[2681]: W0213 15:34:41.661019 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.661073 kubelet[2681]: E0213 15:34:41.661027 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.661217 kubelet[2681]: E0213 15:34:41.661206 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.661217 kubelet[2681]: W0213 15:34:41.661215 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.661275 kubelet[2681]: E0213 15:34:41.661223 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.661650 kubelet[2681]: E0213 15:34:41.661637 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:41.661650 kubelet[2681]: W0213 15:34:41.661647 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:41.661697 kubelet[2681]: E0213 15:34:41.661654 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:41.689551 containerd[1466]: time="2025-02-13T15:34:41.689462845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Feb 13 15:34:41.753835 containerd[1466]: time="2025-02-13T15:34:41.753787924Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:41.808277 containerd[1466]: time="2025-02-13T15:34:41.808205815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:41.809214 containerd[1466]: time="2025-02-13T15:34:41.809158512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 4.128330452s"
Feb 13 15:34:41.809251 containerd[1466]: time="2025-02-13T15:34:41.809212493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 15:34:41.811544 containerd[1466]: time="2025-02-13T15:34:41.811508514Z" level=info msg="CreateContainer within sandbox \"36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:34:42.144665 kubelet[2681]: E0213 15:34:42.144625 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver
line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:42.150839 kubelet[2681]: E0213 15:34:42.150818 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:34:42.150839 kubelet[2681]: W0213 15:34:42.150837 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:34:42.150919 kubelet[2681]: E0213 15:34:42.150859 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:34:42.451856 containerd[1466]: time="2025-02-13T15:34:42.451789501Z" level=info msg="CreateContainer within sandbox \"36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012\""
Feb 13 15:34:42.452378 containerd[1466]: time="2025-02-13T15:34:42.452347398Z" level=info msg="StartContainer for \"eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012\""
Feb 13 15:34:42.487627 systemd[1]: Started cri-containerd-eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012.scope - libcontainer container eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012.
Feb 13 15:34:42.548785 systemd[1]: cri-containerd-eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012.scope: Deactivated successfully.
Feb 13 15:34:43.031095 kubelet[2681]: E0213 15:34:43.031046 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:43.269203 containerd[1466]: time="2025-02-13T15:34:43.269139595Z" level=info msg="StartContainer for \"eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012\" returns successfully"
Feb 13 15:34:43.291978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012-rootfs.mount: Deactivated successfully.
Feb 13 15:34:43.294787 containerd[1466]: time="2025-02-13T15:34:43.294723896Z" level=info msg="shim disconnected" id=eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012 namespace=k8s.io
Feb 13 15:34:43.294894 containerd[1466]: time="2025-02-13T15:34:43.294790170Z" level=warning msg="cleaning up after shim disconnected" id=eaa1c25d3f5baaf6e67776e198cfac8f92c7dca6c9ed52829be36929b66f6012 namespace=k8s.io
Feb 13 15:34:43.294894 containerd[1466]: time="2025-02-13T15:34:43.294802403Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:43.882695 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:51598.service - OpenSSH per-connection server daemon (10.0.0.1:51598).
Feb 13 15:34:43.933642 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 51598 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:43.934952 sshd-session[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:43.938905 systemd-logind[1449]: New session 11 of user core.
Feb 13 15:34:43.945600 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:34:44.057676 sshd[3500]: Connection closed by 10.0.0.1 port 51598
Feb 13 15:34:44.058020 sshd-session[3498]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:44.062278 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:51598.service: Deactivated successfully.
Feb 13 15:34:44.064297 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:34:44.064943 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:34:44.065878 systemd-logind[1449]: Removed session 11.
Feb 13 15:34:44.274431 kubelet[2681]: E0213 15:34:44.274400 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:44.275006 containerd[1466]: time="2025-02-13T15:34:44.274950454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:34:45.030686 kubelet[2681]: E0213 15:34:45.030631 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:47.030554 kubelet[2681]: E0213 15:34:47.030409 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:49.030430 kubelet[2681]: E0213 15:34:49.030350 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:49.069779 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:51930.service - OpenSSH per-connection server daemon (10.0.0.1:51930).
Feb 13 15:34:49.123149 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 51930 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:34:49.125363 sshd-session[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:34:49.132551 systemd-logind[1449]: New session 12 of user core.
Feb 13 15:34:49.137783 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:34:49.297681 sshd[3530]: Connection closed by 10.0.0.1 port 51930
Feb 13 15:34:49.298146 sshd-session[3528]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:49.301586 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:51930.service: Deactivated successfully.
Feb 13 15:34:49.304360 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:34:49.306405 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:34:49.307436 systemd-logind[1449]: Removed session 12.
Feb 13 15:34:49.718876 containerd[1466]: time="2025-02-13T15:34:49.718824143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:49.719838 containerd[1466]: time="2025-02-13T15:34:49.719788232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 15:34:49.720913 containerd[1466]: time="2025-02-13T15:34:49.720887274Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:49.723686 containerd[1466]: time="2025-02-13T15:34:49.723656931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:34:49.724381 containerd[1466]: time="2025-02-13T15:34:49.724327881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.449335488s"
Feb 13 15:34:49.724381 containerd[1466]: time="2025-02-13T15:34:49.724359790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 15:34:49.726267 containerd[1466]: time="2025-02-13T15:34:49.726239357Z" level=info msg="CreateContainer within sandbox \"36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:34:49.740215 containerd[1466]: time="2025-02-13T15:34:49.740172372Z" level=info msg="CreateContainer within sandbox \"36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c\""
Feb 13 15:34:49.741027 containerd[1466]: time="2025-02-13T15:34:49.740462527Z" level=info msg="StartContainer for \"fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c\""
Feb 13 15:34:49.765172 systemd[1]: run-containerd-runc-k8s.io-fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c-runc.CVWP9K.mount: Deactivated successfully.
Feb 13 15:34:49.774654 systemd[1]: Started cri-containerd-fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c.scope - libcontainer container fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c.
Feb 13 15:34:49.808545 containerd[1466]: time="2025-02-13T15:34:49.807395150Z" level=info msg="StartContainer for \"fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c\" returns successfully"
Feb 13 15:34:50.286722 kubelet[2681]: E0213 15:34:50.286640 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:51.030277 kubelet[2681]: E0213 15:34:51.030193 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:51.288779 kubelet[2681]: E0213 15:34:51.288658 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:52.113868 containerd[1466]: time="2025-02-13T15:34:52.113818774Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:34:52.117007 systemd[1]: cri-containerd-fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c.scope: Deactivated successfully.
Feb 13 15:34:52.138927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c-rootfs.mount: Deactivated successfully.
Feb 13 15:34:52.198854 kubelet[2681]: I0213 15:34:52.198787 2681 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:34:52.772685 kubelet[2681]: I0213 15:34:52.772002 2681 topology_manager.go:215] "Topology Admit Handler" podUID="a3315812-4b88-46a2-9ae1-f680abf2a097" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8j5t5"
Feb 13 15:34:52.772685 kubelet[2681]: I0213 15:34:52.772149 2681 topology_manager.go:215] "Topology Admit Handler" podUID="5cb650da-ab42-472c-a151-9ecc42bf5e46" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hknph"
Feb 13 15:34:52.772685 kubelet[2681]: I0213 15:34:52.772213 2681 topology_manager.go:215] "Topology Admit Handler" podUID="9915f1a9-705a-4e2d-8b4c-287c91d00c76" podNamespace="calico-system" podName="calico-kube-controllers-68948865cb-5zwq2"
Feb 13 15:34:52.773400 kubelet[2681]: I0213 15:34:52.773211 2681 topology_manager.go:215] "Topology Admit Handler" podUID="866f7a22-7bcf-4b9e-a920-3e5d167eac08" podNamespace="calico-apiserver" podName="calico-apiserver-768d56694b-826sp"
Feb 13 15:34:52.774914 kubelet[2681]: I0213 15:34:52.774273 2681 topology_manager.go:215] "Topology Admit Handler" podUID="b07b9bcf-86ee-434f-97e1-b4c13955f24e" podNamespace="calico-apiserver" podName="calico-apiserver-768d56694b-6nms6"
Feb 13 15:34:52.780883 systemd[1]: Created slice kubepods-besteffort-pod9915f1a9_705a_4e2d_8b4c_287c91d00c76.slice - libcontainer container kubepods-besteffort-pod9915f1a9_705a_4e2d_8b4c_287c91d00c76.slice.
Feb 13 15:34:52.786366 systemd[1]: Created slice kubepods-besteffort-pod866f7a22_7bcf_4b9e_a920_3e5d167eac08.slice - libcontainer container kubepods-besteffort-pod866f7a22_7bcf_4b9e_a920_3e5d167eac08.slice.
Feb 13 15:34:52.790148 systemd[1]: Created slice kubepods-burstable-pod5cb650da_ab42_472c_a151_9ecc42bf5e46.slice - libcontainer container kubepods-burstable-pod5cb650da_ab42_472c_a151_9ecc42bf5e46.slice.
Feb 13 15:34:52.795661 systemd[1]: Created slice kubepods-burstable-poda3315812_4b88_46a2_9ae1_f680abf2a097.slice - libcontainer container kubepods-burstable-poda3315812_4b88_46a2_9ae1_f680abf2a097.slice.
Feb 13 15:34:52.799263 systemd[1]: Created slice kubepods-besteffort-podb07b9bcf_86ee_434f_97e1_b4c13955f24e.slice - libcontainer container kubepods-besteffort-podb07b9bcf_86ee_434f_97e1_b4c13955f24e.slice.
Feb 13 15:34:52.930461 kubelet[2681]: I0213 15:34:52.930411 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62f9k\" (UniqueName: \"kubernetes.io/projected/9915f1a9-705a-4e2d-8b4c-287c91d00c76-kube-api-access-62f9k\") pod \"calico-kube-controllers-68948865cb-5zwq2\" (UID: \"9915f1a9-705a-4e2d-8b4c-287c91d00c76\") " pod="calico-system/calico-kube-controllers-68948865cb-5zwq2"
Feb 13 15:34:52.930461 kubelet[2681]: I0213 15:34:52.930454 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b07b9bcf-86ee-434f-97e1-b4c13955f24e-calico-apiserver-certs\") pod \"calico-apiserver-768d56694b-6nms6\" (UID: \"b07b9bcf-86ee-434f-97e1-b4c13955f24e\") " pod="calico-apiserver/calico-apiserver-768d56694b-6nms6"
Feb 13 15:34:52.930461 kubelet[2681]: I0213 15:34:52.930493 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzk44\" (UniqueName: \"kubernetes.io/projected/a3315812-4b88-46a2-9ae1-f680abf2a097-kube-api-access-qzk44\") pod \"coredns-7db6d8ff4d-8j5t5\" (UID: \"a3315812-4b88-46a2-9ae1-f680abf2a097\") " pod="kube-system/coredns-7db6d8ff4d-8j5t5"
Feb 13 15:34:52.930461 kubelet[2681]: I0213 15:34:52.930512 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttdt\" (UniqueName: \"kubernetes.io/projected/b07b9bcf-86ee-434f-97e1-b4c13955f24e-kube-api-access-kttdt\") pod \"calico-apiserver-768d56694b-6nms6\" (UID: \"b07b9bcf-86ee-434f-97e1-b4c13955f24e\") " pod="calico-apiserver/calico-apiserver-768d56694b-6nms6"
Feb 13 15:34:52.930807 kubelet[2681]: I0213 15:34:52.930533 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9915f1a9-705a-4e2d-8b4c-287c91d00c76-tigera-ca-bundle\") pod \"calico-kube-controllers-68948865cb-5zwq2\" (UID: \"9915f1a9-705a-4e2d-8b4c-287c91d00c76\") " pod="calico-system/calico-kube-controllers-68948865cb-5zwq2"
Feb 13 15:34:52.930807 kubelet[2681]: I0213 15:34:52.930635 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t86tw\" (UniqueName: \"kubernetes.io/projected/866f7a22-7bcf-4b9e-a920-3e5d167eac08-kube-api-access-t86tw\") pod \"calico-apiserver-768d56694b-826sp\" (UID: \"866f7a22-7bcf-4b9e-a920-3e5d167eac08\") " pod="calico-apiserver/calico-apiserver-768d56694b-826sp"
Feb 13 15:34:52.930807 kubelet[2681]: I0213 15:34:52.930698 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cb650da-ab42-472c-a151-9ecc42bf5e46-config-volume\") pod \"coredns-7db6d8ff4d-hknph\" (UID: \"5cb650da-ab42-472c-a151-9ecc42bf5e46\") " pod="kube-system/coredns-7db6d8ff4d-hknph"
Feb 13 15:34:52.930807 kubelet[2681]: I0213 15:34:52.930727 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/866f7a22-7bcf-4b9e-a920-3e5d167eac08-calico-apiserver-certs\") pod \"calico-apiserver-768d56694b-826sp\" (UID: \"866f7a22-7bcf-4b9e-a920-3e5d167eac08\") " pod="calico-apiserver/calico-apiserver-768d56694b-826sp"
Feb 13 15:34:52.930807 kubelet[2681]: I0213 15:34:52.930766 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3315812-4b88-46a2-9ae1-f680abf2a097-config-volume\") pod \"coredns-7db6d8ff4d-8j5t5\" (UID: \"a3315812-4b88-46a2-9ae1-f680abf2a097\") " pod="kube-system/coredns-7db6d8ff4d-8j5t5"
Feb 13 15:34:52.930932 kubelet[2681]: I0213 15:34:52.930805 2681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4cvp\" (UniqueName: \"kubernetes.io/projected/5cb650da-ab42-472c-a151-9ecc42bf5e46-kube-api-access-z4cvp\") pod \"coredns-7db6d8ff4d-hknph\" (UID: \"5cb650da-ab42-472c-a151-9ecc42bf5e46\") " pod="kube-system/coredns-7db6d8ff4d-hknph"
Feb 13 15:34:52.938498 containerd[1466]: time="2025-02-13T15:34:52.938404387Z" level=info msg="shim disconnected" id=fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c namespace=k8s.io
Feb 13 15:34:52.938498 containerd[1466]: time="2025-02-13T15:34:52.938469149Z" level=warning msg="cleaning up after shim disconnected" id=fcb76fee5dfcc4b14178ec4d1dacec5fcbe16c3653bec93c6d8d747cc6899d0c namespace=k8s.io
Feb 13 15:34:52.938498 containerd[1466]: time="2025-02-13T15:34:52.938507982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:53.063939 systemd[1]: Created slice kubepods-besteffort-pod0dd57d1b_4cfa_4ca9_82c9_3ab2ae38d94c.slice - libcontainer container kubepods-besteffort-pod0dd57d1b_4cfa_4ca9_82c9_3ab2ae38d94c.slice.
Feb 13 15:34:53.086278 containerd[1466]: time="2025-02-13T15:34:53.086216634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:0,}"
Feb 13 15:34:53.086379 containerd[1466]: time="2025-02-13T15:34:53.086269293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:0,}"
Feb 13 15:34:53.089896 containerd[1466]: time="2025-02-13T15:34:53.089823251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 15:34:53.093333 kubelet[2681]: E0213 15:34:53.093295 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:53.093892 containerd[1466]: time="2025-02-13T15:34:53.093756540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:0,}"
Feb 13 15:34:53.097982 kubelet[2681]: E0213 15:34:53.097960 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:53.098381 containerd[1466]: time="2025-02-13T15:34:53.098348005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:0,}"
Feb 13 15:34:53.101970 containerd[1466]: time="2025-02-13T15:34:53.101936748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 15:34:53.293915 kubelet[2681]: E0213 15:34:53.293868 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:53.294566 containerd[1466]: time="2025-02-13T15:34:53.294522785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 15:34:53.559446 containerd[1466]: time="2025-02-13T15:34:53.558613789Z" level=error msg="Failed to destroy network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.560051 containerd[1466]: time="2025-02-13T15:34:53.560009427Z" level=error msg="Failed to destroy network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.560373 containerd[1466]: time="2025-02-13T15:34:53.560339467Z" level=error msg="encountered an error cleaning up failed sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.560434 containerd[1466]: time="2025-02-13T15:34:53.560408717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.562655 kubelet[2681]: E0213 15:34:53.560683 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.562655 kubelet[2681]: E0213 15:34:53.560777 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-826sp"
Feb 13 15:34:53.562655 kubelet[2681]: E0213 15:34:53.560800 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-826sp"
Feb 13 15:34:53.562922 containerd[1466]: time="2025-02-13T15:34:53.560926678Z" level=error msg="encountered an error cleaning up failed sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.562922 containerd[1466]: time="2025-02-13T15:34:53.561068223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.562922 containerd[1466]: time="2025-02-13T15:34:53.562545486Z" level=error msg="Failed to destroy network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.562922 containerd[1466]: time="2025-02-13T15:34:53.562874282Z" level=error msg="encountered an error cleaning up failed sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.562922 containerd[1466]: time="2025-02-13T15:34:53.562916351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.563165 kubelet[2681]: E0213 15:34:53.560840 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768d56694b-826sp_calico-apiserver(866f7a22-7bcf-4b9e-a920-3e5d167eac08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768d56694b-826sp_calico-apiserver(866f7a22-7bcf-4b9e-a920-3e5d167eac08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768d56694b-826sp" podUID="866f7a22-7bcf-4b9e-a920-3e5d167eac08"
Feb 13 15:34:53.563165 kubelet[2681]: E0213 15:34:53.561796 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.563165 kubelet[2681]: E0213 15:34:53.561819 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2"
Feb 13 15:34:53.563296 kubelet[2681]: E0213 15:34:53.561834 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2"
Feb 13 15:34:53.563296 kubelet[2681]: E0213 15:34:53.561859 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68948865cb-5zwq2_calico-system(9915f1a9-705a-4e2d-8b4c-287c91d00c76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68948865cb-5zwq2_calico-system(9915f1a9-705a-4e2d-8b4c-287c91d00c76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" podUID="9915f1a9-705a-4e2d-8b4c-287c91d00c76"
Feb 13 15:34:53.563296 kubelet[2681]: E0213 15:34:53.563052 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.563427 kubelet[2681]: E0213 15:34:53.563087 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6"
Feb 13 15:34:53.563427 kubelet[2681]: E0213 15:34:53.563103 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6"
Feb 13 15:34:53.563427 kubelet[2681]: E0213 15:34:53.563139 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768d56694b-6nms6_calico-apiserver(b07b9bcf-86ee-434f-97e1-b4c13955f24e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768d56694b-6nms6_calico-apiserver(b07b9bcf-86ee-434f-97e1-b4c13955f24e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" podUID="b07b9bcf-86ee-434f-97e1-b4c13955f24e"
Feb 13 15:34:53.568238 containerd[1466]: time="2025-02-13T15:34:53.568182191Z" level=error msg="Failed to destroy network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.568552 containerd[1466]: time="2025-02-13T15:34:53.568514274Z" level=error msg="Failed to destroy network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.568747 containerd[1466]: time="2025-02-13T15:34:53.568593864Z" level=error msg="encountered an error cleaning up failed sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.569245 containerd[1466]: time="2025-02-13T15:34:53.569216692Z" level=error msg="encountered an error cleaning up failed sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.570256 containerd[1466]: time="2025-02-13T15:34:53.570215977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.570345 containerd[1466]: time="2025-02-13T15:34:53.570317808Z" level=error msg="Failed to destroy network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.570509 kubelet[2681]: E0213 15:34:53.570441 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.570563 containerd[1466]: time="2025-02-13T15:34:53.570333537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.570866 kubelet[2681]: E0213 15:34:53.570839 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.570910 kubelet[2681]: E0213 15:34:53.570871 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8j5t5"
Feb 13 15:34:53.570910 kubelet[2681]: E0213 15:34:53.570891 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8j5t5"
Feb 13 15:34:53.570972 kubelet[2681]: E0213 15:34:53.570930 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8j5t5_kube-system(a3315812-4b88-46a2-9ae1-f680abf2a097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8j5t5_kube-system(a3315812-4b88-46a2-9ae1-f680abf2a097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8j5t5" podUID="a3315812-4b88-46a2-9ae1-f680abf2a097"
Feb 13 15:34:53.571060 kubelet[2681]: E0213 15:34:53.570972 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxsjw"
Feb 13 15:34:53.571060 kubelet[2681]: E0213 15:34:53.570992 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxsjw"
Feb 13 15:34:53.571060 kubelet[2681]: E0213 15:34:53.571018 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kxsjw_calico-system(0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kxsjw_calico-system(0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c"
Feb 13 15:34:53.571199 containerd[1466]: time="2025-02-13T15:34:53.571157904Z" level=error msg="encountered an error cleaning up failed sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.571263 containerd[1466]: time="2025-02-13T15:34:53.571217766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.572688 kubelet[2681]: E0213 15:34:53.572659 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:34:53.572774 kubelet[2681]: E0213 15:34:53.572694 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hknph"
Feb 13 15:34:53.572774 kubelet[2681]: E0213 15:34:53.572709 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hknph"
Feb 13 15:34:53.572774 kubelet[2681]: E0213 15:34:53.572745 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hknph_kube-system(5cb650da-ab42-472c-a151-9ecc42bf5e46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hknph_kube-system(5cb650da-ab42-472c-a151-9ecc42bf5e46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hknph" podUID="5cb650da-ab42-472c-a151-9ecc42bf5e46"
Feb 13 15:34:54.296360 kubelet[2681]: I0213 15:34:54.296308 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53"
Feb 13 15:34:54.296936 containerd[1466]: time="2025-02-13T15:34:54.296899344Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\""
Feb 13 15:34:54.297243 containerd[1466]: time="2025-02-13T15:34:54.297159312Z" level=info msg="Ensure that sandbox bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53 in task-service has been cleanup successfully"
Feb 13 15:34:54.297526 containerd[1466]: time="2025-02-13T15:34:54.297361932Z" level=info msg="TearDown network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" successfully"
Feb 13 15:34:54.297526 containerd[1466]: time="2025-02-13T15:34:54.297376760Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" returns successfully"
Feb 13 15:34:54.297898 containerd[1466]: time="2025-02-13T15:34:54.297873581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 15:34:54.298274 kubelet[2681]: I0213 15:34:54.298250 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607"
Feb 13 15:34:54.298924 containerd[1466]: time="2025-02-13T15:34:54.298887764Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\""
Feb 13 15:34:54.299089 containerd[1466]: time="2025-02-13T15:34:54.299067441Z" level=info msg="Ensure that sandbox 25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607 in task-service has been cleanup successfully"
Feb 13 15:34:54.299338 containerd[1466]: time="2025-02-13T15:34:54.299280952Z" level=info msg="TearDown network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" successfully"
Feb 13 15:34:54.299338 containerd[1466]: time="2025-02-13T15:34:54.299299306Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" returns successfully"
Feb 13 15:34:54.299534 kubelet[2681]: E0213 15:34:54.299512 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:34:54.300641 containerd[1466]: time="2025-02-13T15:34:54.300611799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:1,}"
Feb 13 15:34:54.301060 kubelet[2681]: I0213 15:34:54.301018 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac"
Feb 13 15:34:54.301587 containerd[1466]: time="2025-02-13T15:34:54.301562783Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\""
Feb 13 15:34:54.302066 containerd[1466]: time="2025-02-13T15:34:54.302045939Z" level=info msg="Ensure that sandbox cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac in task-service has been cleanup successfully"
Feb 13 15:34:54.302382 containerd[1466]: time="2025-02-13T15:34:54.302360480Z" level=info msg="TearDown network for sandbox
\"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" successfully" Feb 13 15:34:54.302432 containerd[1466]: time="2025-02-13T15:34:54.302382441Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" returns successfully" Feb 13 15:34:54.302893 kubelet[2681]: I0213 15:34:54.302614 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e" Feb 13 15:34:54.303071 containerd[1466]: time="2025-02-13T15:34:54.303034143Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\"" Feb 13 15:34:54.303280 containerd[1466]: time="2025-02-13T15:34:54.303256951Z" level=info msg="Ensure that sandbox a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e in task-service has been cleanup successfully" Feb 13 15:34:54.303562 containerd[1466]: time="2025-02-13T15:34:54.303527099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:34:54.304360 containerd[1466]: time="2025-02-13T15:34:54.303891552Z" level=info msg="TearDown network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" successfully" Feb 13 15:34:54.304360 containerd[1466]: time="2025-02-13T15:34:54.303910999Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" returns successfully" Feb 13 15:34:54.304528 containerd[1466]: time="2025-02-13T15:34:54.304468755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:1,}" Feb 13 15:34:54.304897 kubelet[2681]: I0213 15:34:54.304872 2681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5" Feb 13 15:34:54.305453 containerd[1466]: time="2025-02-13T15:34:54.305415742Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\"" Feb 13 15:34:54.305733 containerd[1466]: time="2025-02-13T15:34:54.305603544Z" level=info msg="Ensure that sandbox 3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5 in task-service has been cleanup successfully" Feb 13 15:34:54.305779 containerd[1466]: time="2025-02-13T15:34:54.305762732Z" level=info msg="TearDown network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" successfully" Feb 13 15:34:54.305779 containerd[1466]: time="2025-02-13T15:34:54.305773172Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" returns successfully" Feb 13 15:34:54.306158 kubelet[2681]: I0213 15:34:54.305919 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505" Feb 13 15:34:54.306158 kubelet[2681]: E0213 15:34:54.306058 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:54.306914 containerd[1466]: time="2025-02-13T15:34:54.306378057Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\"" Feb 13 15:34:54.306914 containerd[1466]: time="2025-02-13T15:34:54.306539380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:1,}" Feb 13 15:34:54.306914 containerd[1466]: time="2025-02-13T15:34:54.306595866Z" level=info msg="Ensure that sandbox 028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505 in task-service has been cleanup 
successfully" Feb 13 15:34:54.306914 containerd[1466]: time="2025-02-13T15:34:54.306834213Z" level=info msg="TearDown network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" successfully" Feb 13 15:34:54.306914 containerd[1466]: time="2025-02-13T15:34:54.306849181Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" returns successfully" Feb 13 15:34:54.307298 containerd[1466]: time="2025-02-13T15:34:54.307258780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:1,}" Feb 13 15:34:54.316863 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:51938.service - OpenSSH per-connection server daemon (10.0.0.1:51938). Feb 13 15:34:54.365968 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 51938 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:54.365876 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:54.375251 systemd-logind[1449]: New session 13 of user core. Feb 13 15:34:54.380733 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 15:34:54.429387 containerd[1466]: time="2025-02-13T15:34:54.429231516Z" level=error msg="Failed to destroy network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.430904 containerd[1466]: time="2025-02-13T15:34:54.430668582Z" level=error msg="encountered an error cleaning up failed sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.431128 containerd[1466]: time="2025-02-13T15:34:54.431096198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.432002 kubelet[2681]: E0213 15:34:54.431780 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.432002 kubelet[2681]: E0213 15:34:54.431860 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" Feb 13 15:34:54.432002 kubelet[2681]: E0213 15:34:54.431886 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" Feb 13 15:34:54.432116 kubelet[2681]: E0213 15:34:54.431931 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768d56694b-6nms6_calico-apiserver(b07b9bcf-86ee-434f-97e1-b4c13955f24e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768d56694b-6nms6_calico-apiserver(b07b9bcf-86ee-434f-97e1-b4c13955f24e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" podUID="b07b9bcf-86ee-434f-97e1-b4c13955f24e" Feb 13 15:34:54.433297 containerd[1466]: time="2025-02-13T15:34:54.433255624Z" level=error msg="Failed to destroy network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.433847 containerd[1466]: time="2025-02-13T15:34:54.433814866Z" level=error msg="encountered an error cleaning up failed sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.434575 containerd[1466]: time="2025-02-13T15:34:54.433879008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.435208 kubelet[2681]: E0213 15:34:54.435174 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.435269 kubelet[2681]: E0213 15:34:54.435223 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-hknph" Feb 13 15:34:54.435269 kubelet[2681]: E0213 15:34:54.435242 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hknph" Feb 13 15:34:54.435314 kubelet[2681]: E0213 15:34:54.435287 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hknph_kube-system(5cb650da-ab42-472c-a151-9ecc42bf5e46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hknph_kube-system(5cb650da-ab42-472c-a151-9ecc42bf5e46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hknph" podUID="5cb650da-ab42-472c-a151-9ecc42bf5e46" Feb 13 15:34:54.446094 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac-shm.mount: Deactivated successfully. Feb 13 15:34:54.446198 systemd[1]: run-netns-cni\x2dba3b0919\x2dff70\x2d35f9\x2d89a4\x2d58d09d7a12c5.mount: Deactivated successfully. Feb 13 15:34:54.446270 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e-shm.mount: Deactivated successfully. Feb 13 15:34:54.446343 systemd[1]: run-netns-cni\x2dc43d78c7\x2d7e10\x2d1730\x2debbb\x2d08078ec326b3.mount: Deactivated successfully. 
Feb 13 15:34:54.446414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505-shm.mount: Deactivated successfully. Feb 13 15:34:54.447124 containerd[1466]: time="2025-02-13T15:34:54.446970459Z" level=error msg="Failed to destroy network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.447718 containerd[1466]: time="2025-02-13T15:34:54.447688741Z" level=error msg="encountered an error cleaning up failed sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.447927 containerd[1466]: time="2025-02-13T15:34:54.447822783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.448496 kubelet[2681]: E0213 15:34:54.448338 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:34:54.448496 kubelet[2681]: E0213 15:34:54.448393 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxsjw" Feb 13 15:34:54.448496 kubelet[2681]: E0213 15:34:54.448413 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxsjw" Feb 13 15:34:54.449504 kubelet[2681]: E0213 15:34:54.448720 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kxsjw_calico-system(0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kxsjw_calico-system(0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c" Feb 13 15:34:54.451141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66-shm.mount: Deactivated successfully. 
Feb 13 15:34:54.451237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4-shm.mount: Deactivated successfully. Feb 13 15:34:54.475216 containerd[1466]: time="2025-02-13T15:34:54.475163534Z" level=error msg="Failed to destroy network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.477899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8-shm.mount: Deactivated successfully. Feb 13 15:34:54.479530 containerd[1466]: time="2025-02-13T15:34:54.479192721Z" level=error msg="encountered an error cleaning up failed sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.479530 containerd[1466]: time="2025-02-13T15:34:54.479361559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.479920 kubelet[2681]: E0213 15:34:54.479882 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.480434 kubelet[2681]: E0213 15:34:54.480071 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-826sp" Feb 13 15:34:54.480434 kubelet[2681]: E0213 15:34:54.480094 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-826sp" Feb 13 15:34:54.480434 kubelet[2681]: E0213 15:34:54.480143 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768d56694b-826sp_calico-apiserver(866f7a22-7bcf-4b9e-a920-3e5d167eac08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768d56694b-826sp_calico-apiserver(866f7a22-7bcf-4b9e-a920-3e5d167eac08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-768d56694b-826sp" podUID="866f7a22-7bcf-4b9e-a920-3e5d167eac08" Feb 13 15:34:54.481779 containerd[1466]: time="2025-02-13T15:34:54.481756770Z" level=error msg="Failed to destroy network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.484687 containerd[1466]: time="2025-02-13T15:34:54.483789337Z" level=error msg="encountered an error cleaning up failed sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.484687 containerd[1466]: time="2025-02-13T15:34:54.483847827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.485770 kubelet[2681]: E0213 15:34:54.484446 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.485770 kubelet[2681]: E0213 
15:34:54.484559 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" Feb 13 15:34:54.485770 kubelet[2681]: E0213 15:34:54.484586 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" Feb 13 15:34:54.485395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e-shm.mount: Deactivated successfully. 
Feb 13 15:34:54.485913 kubelet[2681]: E0213 15:34:54.484632 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68948865cb-5zwq2_calico-system(9915f1a9-705a-4e2d-8b4c-287c91d00c76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68948865cb-5zwq2_calico-system(9915f1a9-705a-4e2d-8b4c-287c91d00c76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" podUID="9915f1a9-705a-4e2d-8b4c-287c91d00c76" Feb 13 15:34:54.489509 containerd[1466]: time="2025-02-13T15:34:54.487575226Z" level=error msg="Failed to destroy network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.489509 containerd[1466]: time="2025-02-13T15:34:54.488083653Z" level=error msg="encountered an error cleaning up failed sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.489509 containerd[1466]: time="2025-02-13T15:34:54.488185575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.489682 kubelet[2681]: E0213 15:34:54.488449 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:34:54.489682 kubelet[2681]: E0213 15:34:54.488530 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8j5t5" Feb 13 15:34:54.489682 kubelet[2681]: E0213 15:34:54.488553 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8j5t5" Feb 13 15:34:54.489804 kubelet[2681]: E0213 15:34:54.488609 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8j5t5_kube-system(a3315812-4b88-46a2-9ae1-f680abf2a097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-8j5t5_kube-system(a3315812-4b88-46a2-9ae1-f680abf2a097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8j5t5" podUID="a3315812-4b88-46a2-9ae1-f680abf2a097" Feb 13 15:34:54.491106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05-shm.mount: Deactivated successfully. Feb 13 15:34:54.513522 sshd[3938]: Connection closed by 10.0.0.1 port 51938 Feb 13 15:34:54.513907 sshd-session[3849]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:54.517964 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:51938.service: Deactivated successfully. Feb 13 15:34:54.520063 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:34:54.520646 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:34:54.521499 systemd-logind[1449]: Removed session 13. 
Feb 13 15:34:55.308658 kubelet[2681]: I0213 15:34:55.308620 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05" Feb 13 15:34:55.309283 containerd[1466]: time="2025-02-13T15:34:55.309253491Z" level=info msg="StopPodSandbox for \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\"" Feb 13 15:34:55.309563 containerd[1466]: time="2025-02-13T15:34:55.309452928Z" level=info msg="Ensure that sandbox 91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05 in task-service has been cleanup successfully" Feb 13 15:34:55.309756 containerd[1466]: time="2025-02-13T15:34:55.309730585Z" level=info msg="TearDown network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" successfully" Feb 13 15:34:55.309756 containerd[1466]: time="2025-02-13T15:34:55.309748440Z" level=info msg="StopPodSandbox for \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" returns successfully" Feb 13 15:34:55.310118 containerd[1466]: time="2025-02-13T15:34:55.310070684Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\"" Feb 13 15:34:55.310234 containerd[1466]: time="2025-02-13T15:34:55.310180316Z" level=info msg="TearDown network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" successfully" Feb 13 15:34:55.310234 containerd[1466]: time="2025-02-13T15:34:55.310194042Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" returns successfully" Feb 13 15:34:55.310441 kubelet[2681]: E0213 15:34:55.310417 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:55.310696 kubelet[2681]: I0213 15:34:55.310679 2681 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e" Feb 13 15:34:55.311139 containerd[1466]: time="2025-02-13T15:34:55.311111029Z" level=info msg="StopPodSandbox for \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\"" Feb 13 15:34:55.311182 containerd[1466]: time="2025-02-13T15:34:55.311169081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:2,}" Feb 13 15:34:55.311293 containerd[1466]: time="2025-02-13T15:34:55.311256972Z" level=info msg="Ensure that sandbox e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e in task-service has been cleanup successfully" Feb 13 15:34:55.311440 containerd[1466]: time="2025-02-13T15:34:55.311406171Z" level=info msg="TearDown network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" successfully" Feb 13 15:34:55.311440 containerd[1466]: time="2025-02-13T15:34:55.311427752Z" level=info msg="StopPodSandbox for \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" returns successfully" Feb 13 15:34:55.311987 containerd[1466]: time="2025-02-13T15:34:55.311946747Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\"" Feb 13 15:34:55.312067 containerd[1466]: time="2025-02-13T15:34:55.312024127Z" level=info msg="TearDown network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" successfully" Feb 13 15:34:55.312067 containerd[1466]: time="2025-02-13T15:34:55.312040008Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" returns successfully" Feb 13 15:34:55.312580 containerd[1466]: time="2025-02-13T15:34:55.312534466Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:2,}" Feb 13 15:34:55.312841 kubelet[2681]: I0213 15:34:55.312814 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4" Feb 13 15:34:55.313315 containerd[1466]: time="2025-02-13T15:34:55.313258278Z" level=info msg="StopPodSandbox for \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\"" Feb 13 15:34:55.313427 containerd[1466]: time="2025-02-13T15:34:55.313406154Z" level=info msg="Ensure that sandbox 619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4 in task-service has been cleanup successfully" Feb 13 15:34:55.313759 containerd[1466]: time="2025-02-13T15:34:55.313734730Z" level=info msg="TearDown network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" successfully" Feb 13 15:34:55.313859 containerd[1466]: time="2025-02-13T15:34:55.313812040Z" level=info msg="StopPodSandbox for \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" returns successfully" Feb 13 15:34:55.314112 containerd[1466]: time="2025-02-13T15:34:55.314091030Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\"" Feb 13 15:34:55.314193 containerd[1466]: time="2025-02-13T15:34:55.314177949Z" level=info msg="TearDown network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" successfully" Feb 13 15:34:55.314193 containerd[1466]: time="2025-02-13T15:34:55.314187217Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" returns successfully" Feb 13 15:34:55.314375 kubelet[2681]: I0213 15:34:55.314311 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8" Feb 13 
15:34:55.314375 kubelet[2681]: E0213 15:34:55.314311 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:34:55.315311 containerd[1466]: time="2025-02-13T15:34:55.314669491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:2,}" Feb 13 15:34:55.315311 containerd[1466]: time="2025-02-13T15:34:55.315265957Z" level=info msg="StopPodSandbox for \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\"" Feb 13 15:34:55.315632 containerd[1466]: time="2025-02-13T15:34:55.315526221Z" level=info msg="Ensure that sandbox b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8 in task-service has been cleanup successfully" Feb 13 15:34:55.315728 kubelet[2681]: I0213 15:34:55.315702 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66" Feb 13 15:34:55.315763 containerd[1466]: time="2025-02-13T15:34:55.315734464Z" level=info msg="TearDown network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" successfully" Feb 13 15:34:55.315763 containerd[1466]: time="2025-02-13T15:34:55.315749422Z" level=info msg="StopPodSandbox for \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" returns successfully" Feb 13 15:34:55.316096 containerd[1466]: time="2025-02-13T15:34:55.316074071Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\"" Feb 13 15:34:55.316130 containerd[1466]: time="2025-02-13T15:34:55.316089290Z" level=info msg="StopPodSandbox for \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\"" Feb 13 15:34:55.316178 containerd[1466]: time="2025-02-13T15:34:55.316160188Z" level=info msg="TearDown network for sandbox 
\"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" successfully" Feb 13 15:34:55.316178 containerd[1466]: time="2025-02-13T15:34:55.316171600Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" returns successfully" Feb 13 15:34:55.316756 containerd[1466]: time="2025-02-13T15:34:55.316265021Z" level=info msg="Ensure that sandbox ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66 in task-service has been cleanup successfully" Feb 13 15:34:55.316756 containerd[1466]: time="2025-02-13T15:34:55.316717578Z" level=info msg="TearDown network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" successfully" Feb 13 15:34:55.316756 containerd[1466]: time="2025-02-13T15:34:55.316733759Z" level=info msg="StopPodSandbox for \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" returns successfully" Feb 13 15:34:55.317297 containerd[1466]: time="2025-02-13T15:34:55.316811550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:34:55.317556 containerd[1466]: time="2025-02-13T15:34:55.317392175Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\"" Feb 13 15:34:55.317556 containerd[1466]: time="2025-02-13T15:34:55.317468863Z" level=info msg="TearDown network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" successfully" Feb 13 15:34:55.317556 containerd[1466]: time="2025-02-13T15:34:55.317505264Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" returns successfully" Feb 13 15:34:55.317646 kubelet[2681]: I0213 15:34:55.317518 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d" Feb 13 
15:34:55.319173 containerd[1466]: time="2025-02-13T15:34:55.318898401Z" level=info msg="StopPodSandbox for \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\"" Feb 13 15:34:55.319173 containerd[1466]: time="2025-02-13T15:34:55.318995329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:2,}" Feb 13 15:34:55.319173 containerd[1466]: time="2025-02-13T15:34:55.319057840Z" level=info msg="Ensure that sandbox a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d in task-service has been cleanup successfully" Feb 13 15:34:55.319331 containerd[1466]: time="2025-02-13T15:34:55.319314578Z" level=info msg="TearDown network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" successfully" Feb 13 15:34:55.319384 containerd[1466]: time="2025-02-13T15:34:55.319367410Z" level=info msg="StopPodSandbox for \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" returns successfully" Feb 13 15:34:55.319594 containerd[1466]: time="2025-02-13T15:34:55.319571275Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\"" Feb 13 15:34:55.319660 containerd[1466]: time="2025-02-13T15:34:55.319645879Z" level=info msg="TearDown network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" successfully" Feb 13 15:34:55.319700 containerd[1466]: time="2025-02-13T15:34:55.319659145Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" returns successfully" Feb 13 15:34:55.319967 containerd[1466]: time="2025-02-13T15:34:55.319941592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:34:55.433245 systemd[1]: 
run-netns-cni\x2d95d50656\x2d5f76\x2d414f\x2d9d34\x2d59b622e91ffc.mount: Deactivated successfully. Feb 13 15:34:55.433368 systemd[1]: run-netns-cni\x2d1d68bf48\x2d9848\x2dca1c\x2dc68d\x2dd479b56b5212.mount: Deactivated successfully. Feb 13 15:34:55.433440 systemd[1]: run-netns-cni\x2d7f0ce232\x2d4f60\x2d85b8\x2de5c8\x2d26bb77e40294.mount: Deactivated successfully. Feb 13 15:34:55.433525 systemd[1]: run-netns-cni\x2dc3ba8180\x2d08ad\x2d603b\x2d340b\x2dbb1808f0e1b9.mount: Deactivated successfully. Feb 13 15:34:55.433601 systemd[1]: run-netns-cni\x2dc1693b07\x2df2ef\x2d8531\x2d6031\x2d2b9afe311d0d.mount: Deactivated successfully. Feb 13 15:34:55.433669 systemd[1]: run-netns-cni\x2d4c44fbcc\x2d7e0a\x2db770\x2daa2f\x2d5ac54183161b.mount: Deactivated successfully. Feb 13 15:34:59.417678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225466706.mount: Deactivated successfully. Feb 13 15:34:59.525765 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:45526.service - OpenSSH per-connection server daemon (10.0.0.1:45526). Feb 13 15:34:59.962664 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 45526 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:34:59.965631 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:59.970714 systemd-logind[1449]: New session 14 of user core. Feb 13 15:34:59.975607 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:35:00.014841 containerd[1466]: time="2025-02-13T15:35:00.014765876Z" level=error msg="Failed to destroy network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.017426 containerd[1466]: time="2025-02-13T15:35:00.017374358Z" level=error msg="encountered an error cleaning up failed sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.017968 containerd[1466]: time="2025-02-13T15:35:00.017517222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.018018 kubelet[2681]: E0213 15:35:00.017774 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.018018 kubelet[2681]: E0213 15:35:00.017838 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxsjw" Feb 13 15:35:00.018018 kubelet[2681]: E0213 15:35:00.017871 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxsjw" Feb 13 15:35:00.018550 kubelet[2681]: E0213 15:35:00.017912 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kxsjw_calico-system(0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kxsjw_calico-system(0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kxsjw" podUID="0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c" Feb 13 15:35:00.028348 containerd[1466]: time="2025-02-13T15:35:00.028071921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:00.031302 containerd[1466]: time="2025-02-13T15:35:00.031215755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, 
bytes read=142742010" Feb 13 15:35:00.032704 containerd[1466]: time="2025-02-13T15:35:00.032669038Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:00.038256 containerd[1466]: time="2025-02-13T15:35:00.038198383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:00.039618 containerd[1466]: time="2025-02-13T15:35:00.039566041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.745004071s" Feb 13 15:35:00.042145 containerd[1466]: time="2025-02-13T15:35:00.042105890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:35:00.054114 containerd[1466]: time="2025-02-13T15:35:00.054047682Z" level=info msg="CreateContainer within sandbox \"36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:35:00.100117 containerd[1466]: time="2025-02-13T15:35:00.100071173Z" level=info msg="CreateContainer within sandbox \"36fe746900ab063ded2013e37253db018430ac8813c9e37629160787767193ed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c159732440ace52a3c3f35a382d05f48339b0f24e474533a73e5aaf4944f9eba\"" Feb 13 15:35:00.102496 containerd[1466]: time="2025-02-13T15:35:00.101917454Z" level=info msg="StartContainer for 
\"c159732440ace52a3c3f35a382d05f48339b0f24e474533a73e5aaf4944f9eba\"" Feb 13 15:35:00.116622 containerd[1466]: time="2025-02-13T15:35:00.116216163Z" level=error msg="Failed to destroy network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.118801 containerd[1466]: time="2025-02-13T15:35:00.118433310Z" level=error msg="encountered an error cleaning up failed sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.118801 containerd[1466]: time="2025-02-13T15:35:00.118704583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.119100 containerd[1466]: time="2025-02-13T15:35:00.119042635Z" level=error msg="Failed to destroy network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.119575 kubelet[2681]: E0213 15:35:00.119527 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.119637 kubelet[2681]: E0213 15:35:00.119594 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8j5t5" Feb 13 15:35:00.119637 kubelet[2681]: E0213 15:35:00.119617 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8j5t5" Feb 13 15:35:00.119683 kubelet[2681]: E0213 15:35:00.119659 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8j5t5_kube-system(a3315812-4b88-46a2-9ae1-f680abf2a097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8j5t5_kube-system(a3315812-4b88-46a2-9ae1-f680abf2a097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-8j5t5" podUID="a3315812-4b88-46a2-9ae1-f680abf2a097" Feb 13 15:35:00.120318 containerd[1466]: time="2025-02-13T15:35:00.120276686Z" level=error msg="encountered an error cleaning up failed sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.120397 containerd[1466]: time="2025-02-13T15:35:00.120326331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.120746 kubelet[2681]: E0213 15:35:00.120462 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.120746 kubelet[2681]: E0213 15:35:00.120543 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-768d56694b-826sp" Feb 13 15:35:00.120746 kubelet[2681]: E0213 15:35:00.120559 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-826sp" Feb 13 15:35:00.120846 kubelet[2681]: E0213 15:35:00.120587 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768d56694b-826sp_calico-apiserver(866f7a22-7bcf-4b9e-a920-3e5d167eac08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768d56694b-826sp_calico-apiserver(866f7a22-7bcf-4b9e-a920-3e5d167eac08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768d56694b-826sp" podUID="866f7a22-7bcf-4b9e-a920-3e5d167eac08" Feb 13 15:35:00.132842 sshd[4131]: Connection closed by 10.0.0.1 port 45526 Feb 13 15:35:00.133784 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:00.137180 containerd[1466]: time="2025-02-13T15:35:00.137058244Z" level=error msg="Failed to destroy network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.137485 
containerd[1466]: time="2025-02-13T15:35:00.137444629Z" level=error msg="encountered an error cleaning up failed sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.137541 containerd[1466]: time="2025-02-13T15:35:00.137521467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.137758 kubelet[2681]: E0213 15:35:00.137716 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.137803 kubelet[2681]: E0213 15:35:00.137774 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" Feb 13 15:35:00.137803 kubelet[2681]: E0213 
15:35:00.137794 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" Feb 13 15:35:00.137855 kubelet[2681]: E0213 15:35:00.137833 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68948865cb-5zwq2_calico-system(9915f1a9-705a-4e2d-8b4c-287c91d00c76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68948865cb-5zwq2_calico-system(9915f1a9-705a-4e2d-8b4c-287c91d00c76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" podUID="9915f1a9-705a-4e2d-8b4c-287c91d00c76" Feb 13 15:35:00.144174 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:45526.service: Deactivated successfully. Feb 13 15:35:00.148105 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 13 15:35:00.148755 containerd[1466]: time="2025-02-13T15:35:00.148664991Z" level=error msg="Failed to destroy network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.149414 containerd[1466]: time="2025-02-13T15:35:00.149314022Z" level=error msg="encountered an error cleaning up failed sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.149414 containerd[1466]: time="2025-02-13T15:35:00.149387474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.151444 kubelet[2681]: E0213 15:35:00.149734 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.151444 kubelet[2681]: E0213 15:35:00.149789 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hknph" Feb 13 15:35:00.151444 kubelet[2681]: E0213 15:35:00.149881 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hknph" Feb 13 15:35:00.151180 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:35:00.151637 kubelet[2681]: E0213 15:35:00.149939 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hknph_kube-system(5cb650da-ab42-472c-a151-9ecc42bf5e46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hknph_kube-system(5cb650da-ab42-472c-a151-9ecc42bf5e46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hknph" podUID="5cb650da-ab42-472c-a151-9ecc42bf5e46" Feb 13 15:35:00.156207 containerd[1466]: time="2025-02-13T15:35:00.156157563Z" level=error msg="Failed to destroy network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.156840 containerd[1466]: time="2025-02-13T15:35:00.156807637Z" level=error msg="encountered an error cleaning up failed sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.157389 containerd[1466]: time="2025-02-13T15:35:00.156954940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.157470 kubelet[2681]: E0213 15:35:00.157108 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:35:00.157470 kubelet[2681]: E0213 15:35:00.157155 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" Feb 13 15:35:00.157470 kubelet[2681]: E0213 15:35:00.157173 2681 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" Feb 13 15:35:00.158803 kubelet[2681]: E0213 15:35:00.157207 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768d56694b-6nms6_calico-apiserver(b07b9bcf-86ee-434f-97e1-b4c13955f24e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768d56694b-6nms6_calico-apiserver(b07b9bcf-86ee-434f-97e1-b4c13955f24e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" podUID="b07b9bcf-86ee-434f-97e1-b4c13955f24e" Feb 13 15:35:00.159620 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:45532.service - OpenSSH per-connection server daemon (10.0.0.1:45532). Feb 13 15:35:00.161208 systemd-logind[1449]: Removed session 14. Feb 13 15:35:00.198683 systemd[1]: Started cri-containerd-c159732440ace52a3c3f35a382d05f48339b0f24e474533a73e5aaf4944f9eba.scope - libcontainer container c159732440ace52a3c3f35a382d05f48339b0f24e474533a73e5aaf4944f9eba. 
Feb 13 15:35:00.199070 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 45532 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:00.200841 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:00.206749 systemd-logind[1449]: New session 15 of user core. Feb 13 15:35:00.215618 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:35:00.237780 containerd[1466]: time="2025-02-13T15:35:00.237729919Z" level=info msg="StartContainer for \"c159732440ace52a3c3f35a382d05f48339b0f24e474533a73e5aaf4944f9eba\" returns successfully" Feb 13 15:35:00.304004 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:35:00.304175 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Feb 13 15:35:00.361670 kubelet[2681]: E0213 15:35:00.358938 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:00.371119 kubelet[2681]: I0213 15:35:00.371071 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5" Feb 13 15:35:00.372262 containerd[1466]: time="2025-02-13T15:35:00.372212689Z" level=info msg="StopPodSandbox for \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\"" Feb 13 15:35:00.372463 containerd[1466]: time="2025-02-13T15:35:00.372436702Z" level=info msg="Ensure that sandbox 5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5 in task-service has been cleanup successfully" Feb 13 15:35:00.372709 containerd[1466]: time="2025-02-13T15:35:00.372686845Z" level=info msg="TearDown network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\" successfully" Feb 13 15:35:00.372709 containerd[1466]: time="2025-02-13T15:35:00.372703055Z" 
level=info msg="StopPodSandbox for \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\" returns successfully" Feb 13 15:35:00.375982 containerd[1466]: time="2025-02-13T15:35:00.375735204Z" level=info msg="StopPodSandbox for \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\"" Feb 13 15:35:00.375982 containerd[1466]: time="2025-02-13T15:35:00.375876867Z" level=info msg="TearDown network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" successfully" Feb 13 15:35:00.375982 containerd[1466]: time="2025-02-13T15:35:00.375922806Z" level=info msg="StopPodSandbox for \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" returns successfully" Feb 13 15:35:00.376610 containerd[1466]: time="2025-02-13T15:35:00.376576457Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\"" Feb 13 15:35:00.376787 containerd[1466]: time="2025-02-13T15:35:00.376683173Z" level=info msg="TearDown network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" successfully" Feb 13 15:35:00.376787 containerd[1466]: time="2025-02-13T15:35:00.376694624Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" returns successfully" Feb 13 15:35:00.376930 kubelet[2681]: E0213 15:35:00.376904 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:00.377241 containerd[1466]: time="2025-02-13T15:35:00.377177355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:3,}" Feb 13 15:35:00.378655 kubelet[2681]: I0213 15:35:00.378280 2681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e" Feb 13 15:35:00.378913 containerd[1466]: time="2025-02-13T15:35:00.378787711Z" level=info msg="StopPodSandbox for \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\"" Feb 13 15:35:00.379029 containerd[1466]: time="2025-02-13T15:35:00.379003278Z" level=info msg="Ensure that sandbox f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e in task-service has been cleanup successfully" Feb 13 15:35:00.379331 containerd[1466]: time="2025-02-13T15:35:00.379223553Z" level=info msg="TearDown network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\" successfully" Feb 13 15:35:00.379331 containerd[1466]: time="2025-02-13T15:35:00.379248741Z" level=info msg="StopPodSandbox for \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\" returns successfully" Feb 13 15:35:00.379674 containerd[1466]: time="2025-02-13T15:35:00.379632691Z" level=info msg="StopPodSandbox for \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\"" Feb 13 15:35:00.380136 containerd[1466]: time="2025-02-13T15:35:00.380119801Z" level=info msg="TearDown network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" successfully" Feb 13 15:35:00.380313 containerd[1466]: time="2025-02-13T15:35:00.380298335Z" level=info msg="StopPodSandbox for \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" returns successfully" Feb 13 15:35:00.381183 containerd[1466]: time="2025-02-13T15:35:00.381154607Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\"" Feb 13 15:35:00.381305 kubelet[2681]: I0213 15:35:00.381284 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b" Feb 13 15:35:00.381361 containerd[1466]: time="2025-02-13T15:35:00.381349162Z" level=info 
msg="TearDown network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" successfully" Feb 13 15:35:00.381387 containerd[1466]: time="2025-02-13T15:35:00.381360424Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" returns successfully" Feb 13 15:35:00.383457 containerd[1466]: time="2025-02-13T15:35:00.382808217Z" level=info msg="StopPodSandbox for \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\"" Feb 13 15:35:00.384039 containerd[1466]: time="2025-02-13T15:35:00.384000185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:3,}" Feb 13 15:35:00.385444 containerd[1466]: time="2025-02-13T15:35:00.385386700Z" level=info msg="Ensure that sandbox f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b in task-service has been cleanup successfully" Feb 13 15:35:00.385793 containerd[1466]: time="2025-02-13T15:35:00.385762485Z" level=info msg="TearDown network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\" successfully" Feb 13 15:35:00.385944 containerd[1466]: time="2025-02-13T15:35:00.385779167Z" level=info msg="StopPodSandbox for \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\" returns successfully" Feb 13 15:35:00.386199 kubelet[2681]: I0213 15:35:00.386089 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862" Feb 13 15:35:00.386397 containerd[1466]: time="2025-02-13T15:35:00.386372320Z" level=info msg="StopPodSandbox for \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\"" Feb 13 15:35:00.386830 containerd[1466]: time="2025-02-13T15:35:00.386678882Z" level=info msg="TearDown network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" 
successfully" Feb 13 15:35:00.386830 containerd[1466]: time="2025-02-13T15:35:00.386724951Z" level=info msg="StopPodSandbox for \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" returns successfully" Feb 13 15:35:00.386830 containerd[1466]: time="2025-02-13T15:35:00.386813422Z" level=info msg="StopPodSandbox for \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\"" Feb 13 15:35:00.387279 containerd[1466]: time="2025-02-13T15:35:00.387208163Z" level=info msg="Ensure that sandbox 8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862 in task-service has been cleanup successfully" Feb 13 15:35:00.387555 containerd[1466]: time="2025-02-13T15:35:00.387214214Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\"" Feb 13 15:35:00.387653 containerd[1466]: time="2025-02-13T15:35:00.387627232Z" level=info msg="TearDown network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" successfully" Feb 13 15:35:00.387653 containerd[1466]: time="2025-02-13T15:35:00.387640787Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" returns successfully" Feb 13 15:35:00.387870 containerd[1466]: time="2025-02-13T15:35:00.387788322Z" level=info msg="TearDown network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\" successfully" Feb 13 15:35:00.387870 containerd[1466]: time="2025-02-13T15:35:00.387804012Z" level=info msg="StopPodSandbox for \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\" returns successfully" Feb 13 15:35:00.387965 containerd[1466]: time="2025-02-13T15:35:00.387944413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:35:00.388612 containerd[1466]: time="2025-02-13T15:35:00.388592593Z" level=info 
msg="StopPodSandbox for \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\"" Feb 13 15:35:00.388678 containerd[1466]: time="2025-02-13T15:35:00.388662117Z" level=info msg="TearDown network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" successfully" Feb 13 15:35:00.388678 containerd[1466]: time="2025-02-13T15:35:00.388674952Z" level=info msg="StopPodSandbox for \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" returns successfully" Feb 13 15:35:00.389312 containerd[1466]: time="2025-02-13T15:35:00.389287072Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\"" Feb 13 15:35:00.389380 containerd[1466]: time="2025-02-13T15:35:00.389360173Z" level=info msg="TearDown network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" successfully" Feb 13 15:35:00.389380 containerd[1466]: time="2025-02-13T15:35:00.389373299Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" returns successfully" Feb 13 15:35:00.389798 kubelet[2681]: E0213 15:35:00.389771 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:00.390385 containerd[1466]: time="2025-02-13T15:35:00.390363819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:3,}" Feb 13 15:35:00.390652 kubelet[2681]: I0213 15:35:00.390627 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66" Feb 13 15:35:00.391056 containerd[1466]: time="2025-02-13T15:35:00.391030085Z" level=info msg="StopPodSandbox for \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\"" Feb 13 15:35:00.391200 
containerd[1466]: time="2025-02-13T15:35:00.391178662Z" level=info msg="Ensure that sandbox 8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66 in task-service has been cleanup successfully" Feb 13 15:35:00.391586 containerd[1466]: time="2025-02-13T15:35:00.391561349Z" level=info msg="TearDown network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\" successfully" Feb 13 15:35:00.391586 containerd[1466]: time="2025-02-13T15:35:00.391578502Z" level=info msg="StopPodSandbox for \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\" returns successfully" Feb 13 15:35:00.392313 containerd[1466]: time="2025-02-13T15:35:00.392271910Z" level=info msg="StopPodSandbox for \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\"" Feb 13 15:35:00.392782 containerd[1466]: time="2025-02-13T15:35:00.392691660Z" level=info msg="TearDown network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" successfully" Feb 13 15:35:00.392782 containerd[1466]: time="2025-02-13T15:35:00.392707651Z" level=info msg="StopPodSandbox for \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" returns successfully" Feb 13 15:35:00.394134 containerd[1466]: time="2025-02-13T15:35:00.393697480Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\"" Feb 13 15:35:00.394134 containerd[1466]: time="2025-02-13T15:35:00.393768887Z" level=info msg="TearDown network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" successfully" Feb 13 15:35:00.394134 containerd[1466]: time="2025-02-13T15:35:00.393777464Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" returns successfully" Feb 13 15:35:00.394647 containerd[1466]: time="2025-02-13T15:35:00.394511019Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:35:00.395289 kubelet[2681]: I0213 15:35:00.394767 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525" Feb 13 15:35:00.395348 containerd[1466]: time="2025-02-13T15:35:00.395213344Z" level=info msg="StopPodSandbox for \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\"" Feb 13 15:35:00.400126 containerd[1466]: time="2025-02-13T15:35:00.400010476Z" level=info msg="Ensure that sandbox 931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525 in task-service has been cleanup successfully" Feb 13 15:35:00.400273 containerd[1466]: time="2025-02-13T15:35:00.400258004Z" level=info msg="TearDown network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\" successfully" Feb 13 15:35:00.400341 containerd[1466]: time="2025-02-13T15:35:00.400319913Z" level=info msg="StopPodSandbox for \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\" returns successfully" Feb 13 15:35:00.400946 containerd[1466]: time="2025-02-13T15:35:00.400760934Z" level=info msg="StopPodSandbox for \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\"" Feb 13 15:35:00.400946 containerd[1466]: time="2025-02-13T15:35:00.400913177Z" level=info msg="TearDown network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" successfully" Feb 13 15:35:00.400946 containerd[1466]: time="2025-02-13T15:35:00.400923818Z" level=info msg="StopPodSandbox for \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" returns successfully" Feb 13 15:35:00.401276 containerd[1466]: time="2025-02-13T15:35:00.401246380Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\"" Feb 13 15:35:00.420182 systemd[1]: 
run-netns-cni\x2dd8f183ef\x2d1ac8\x2d3879\x2d8844\x2dd4674125662e.mount: Deactivated successfully. Feb 13 15:35:00.420836 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66-shm.mount: Deactivated successfully. Feb 13 15:35:00.420929 systemd[1]: run-netns-cni\x2d33a291ac\x2d41bf\x2db9e6\x2dcb34\x2d042d4aaaf725.mount: Deactivated successfully. Feb 13 15:35:00.421001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862-shm.mount: Deactivated successfully. Feb 13 15:35:00.421072 systemd[1]: run-netns-cni\x2db7f32fe1\x2db158\x2d4141\x2df633\x2dc9b30d8d0cd5.mount: Deactivated successfully. Feb 13 15:35:00.421140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e-shm.mount: Deactivated successfully. Feb 13 15:35:00.421226 systemd[1]: run-netns-cni\x2d1350f0b7\x2dd539\x2d4e94\x2d3a51\x2def623199dc59.mount: Deactivated successfully. Feb 13 15:35:00.421296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5-shm.mount: Deactivated successfully. Feb 13 15:35:00.421365 systemd[1]: run-netns-cni\x2d8bb4a772\x2d53a7\x2d1feb\x2d353b\x2dd4fa8e9d807f.mount: Deactivated successfully. Feb 13 15:35:00.421440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525-shm.mount: Deactivated successfully. 
Feb 13 15:35:00.468022 containerd[1466]: time="2025-02-13T15:35:00.467899671Z" level=info msg="TearDown network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" successfully" Feb 13 15:35:00.468521 containerd[1466]: time="2025-02-13T15:35:00.468147839Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" returns successfully" Feb 13 15:35:00.469053 containerd[1466]: time="2025-02-13T15:35:00.468884310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:3,}" Feb 13 15:35:00.487798 sshd[4354]: Connection closed by 10.0.0.1 port 45532 Feb 13 15:35:00.488217 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:00.502775 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:45532.service: Deactivated successfully. Feb 13 15:35:00.507239 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:35:00.510094 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:35:00.524153 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:45534.service - OpenSSH per-connection server daemon (10.0.0.1:45534). Feb 13 15:35:00.564467 systemd-logind[1449]: Removed session 15. Feb 13 15:35:00.592588 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 45534 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:00.600619 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:00.616490 systemd-logind[1449]: New session 16 of user core. Feb 13 15:35:00.620659 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 15:35:01.033590 sshd[4493]: Connection closed by 10.0.0.1 port 45534 Feb 13 15:35:01.034371 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:01.039204 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:45534.service: Deactivated successfully. Feb 13 15:35:01.041756 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:35:01.042793 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:35:01.043974 systemd-logind[1449]: Removed session 16. Feb 13 15:35:01.062511 systemd-networkd[1406]: cali40cc0fec4c9: Link UP Feb 13 15:35:01.063339 systemd-networkd[1406]: cali40cc0fec4c9: Gained carrier Feb 13 15:35:01.072445 kubelet[2681]: I0213 15:35:01.072302 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5jh6l" podStartSLOduration=1.479756114 podStartE2EDuration="30.072283162s" podCreationTimestamp="2025-02-13 15:34:31 +0000 UTC" firstStartedPulling="2025-02-13 15:34:31.451362863 +0000 UTC m=+25.489295729" lastFinishedPulling="2025-02-13 15:35:00.043889911 +0000 UTC m=+54.081822777" observedRunningTime="2025-02-13 15:35:00.385208426 +0000 UTC m=+54.423141282" watchObservedRunningTime="2025-02-13 15:35:01.072283162 +0000 UTC m=+55.110216028" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:00.650 [INFO][4442] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:00.889 [INFO][4442] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--hknph-eth0 coredns-7db6d8ff4d- kube-system 5cb650da-ab42-472c-a151-9ecc42bf5e46 809 0 2025-02-13 15:34:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-hknph eth0 coredns [] 
[] [kns.kube-system ksa.kube-system.coredns] cali40cc0fec4c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:00.889 [INFO][4442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.004 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" HandleID="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Workload="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.022 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" HandleID="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Workload="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-hknph", "timestamp":"2025-02-13 15:35:01.004061636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.023 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.023 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.023 [INFO][4529] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.026 [INFO][4529] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.032 [INFO][4529] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.036 [INFO][4529] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.038 [INFO][4529] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.041 [INFO][4529] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.041 [INFO][4529] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.042 [INFO][4529] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.046 [INFO][4529] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.050 [INFO][4529] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.050 [INFO][4529] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" host="localhost" Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.050 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:35:01.080783 containerd[1466]: 2025-02-13 15:35:01.050 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" HandleID="k8s-pod-network.1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Workload="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" Feb 13 15:35:01.081624 containerd[1466]: 2025-02-13 15:35:01.053 [INFO][4442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hknph-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5cb650da-ab42-472c-a151-9ecc42bf5e46", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-hknph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40cc0fec4c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.081624 containerd[1466]: 2025-02-13 15:35:01.053 [INFO][4442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" Feb 13 15:35:01.081624 containerd[1466]: 2025-02-13 15:35:01.053 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40cc0fec4c9 ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" Feb 13 15:35:01.081624 containerd[1466]: 2025-02-13 15:35:01.063 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" Feb 13 15:35:01.081624 containerd[1466]: 2025-02-13 15:35:01.063 [INFO][4442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hknph-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5cb650da-ab42-472c-a151-9ecc42bf5e46", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab", Pod:"coredns-7db6d8ff4d-hknph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40cc0fec4c9", MAC:"ea:47:8f:56:85:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.081624 containerd[1466]: 2025-02-13 15:35:01.076 [INFO][4442] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hknph" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hknph-eth0" Feb 13 15:35:01.090803 systemd-networkd[1406]: cali3d89705ff3d: Link UP Feb 13 15:35:01.091371 systemd-networkd[1406]: cali3d89705ff3d: Gained carrier Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:00.935 [INFO][4514] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:00.961 [INFO][4514] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--768d56694b--826sp-eth0 calico-apiserver-768d56694b- calico-apiserver 866f7a22-7bcf-4b9e-a920-3e5d167eac08 805 0 2025-02-13 15:34:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768d56694b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-768d56694b-826sp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3d89705ff3d [] []}} ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-" Feb 13 15:35:01.102549 containerd[1466]: 
2025-02-13 15:35:00.961 [INFO][4514] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.013 [INFO][4564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" HandleID="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Workload="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.027 [INFO][4564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" HandleID="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Workload="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-768d56694b-826sp", "timestamp":"2025-02-13 15:35:01.01338723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.027 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.050 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.050 [INFO][4564] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.052 [INFO][4564] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.056 [INFO][4564] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.060 [INFO][4564] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.061 [INFO][4564] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.065 [INFO][4564] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.065 [INFO][4564] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.067 [INFO][4564] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818 Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.072 [INFO][4564] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.081 [INFO][4564] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.081 [INFO][4564] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" host="localhost" Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.081 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:35:01.102549 containerd[1466]: 2025-02-13 15:35:01.082 [INFO][4564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" HandleID="k8s-pod-network.33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Workload="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" Feb 13 15:35:01.103099 containerd[1466]: 2025-02-13 15:35:01.086 [INFO][4514] cni-plugin/k8s.go 386: Populated endpoint ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768d56694b--826sp-eth0", GenerateName:"calico-apiserver-768d56694b-", Namespace:"calico-apiserver", SelfLink:"", UID:"866f7a22-7bcf-4b9e-a920-3e5d167eac08", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768d56694b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-768d56694b-826sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d89705ff3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.103099 containerd[1466]: 2025-02-13 15:35:01.086 [INFO][4514] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" Feb 13 15:35:01.103099 containerd[1466]: 2025-02-13 15:35:01.086 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d89705ff3d ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" Feb 13 15:35:01.103099 containerd[1466]: 2025-02-13 15:35:01.091 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" Feb 13 15:35:01.103099 containerd[1466]: 2025-02-13 15:35:01.092 [INFO][4514] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768d56694b--826sp-eth0", GenerateName:"calico-apiserver-768d56694b-", Namespace:"calico-apiserver", SelfLink:"", UID:"866f7a22-7bcf-4b9e-a920-3e5d167eac08", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768d56694b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818", Pod:"calico-apiserver-768d56694b-826sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d89705ff3d", MAC:"22:21:3a:77:f6:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.103099 containerd[1466]: 2025-02-13 15:35:01.098 [INFO][4514] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-826sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--826sp-eth0" Feb 13 15:35:01.120784 systemd-networkd[1406]: cali147afb17dc7: Link UP Feb 13 15:35:01.121046 systemd-networkd[1406]: cali147afb17dc7: Gained carrier Feb 13 15:35:01.360246 systemd-networkd[1406]: calic59e661cc3b: Link UP Feb 13 15:35:01.362521 systemd-networkd[1406]: calic59e661cc3b: Gained carrier Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:00.833 [INFO][4430] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:00.887 [INFO][4430] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0 coredns-7db6d8ff4d- kube-system a3315812-4b88-46a2-9ae1-f680abf2a097 812 0 2025-02-13 15:34:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8j5t5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali147afb17dc7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:00.887 [INFO][4430] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.015 [INFO][4552] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" HandleID="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Workload="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.028 [INFO][4552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" HandleID="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Workload="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037ebf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8j5t5", "timestamp":"2025-02-13 15:35:01.011762979 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.028 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.081 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.082 [INFO][4552] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.084 [INFO][4552] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.089 [INFO][4552] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.097 [INFO][4552] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.100 [INFO][4552] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.102 [INFO][4552] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.102 [INFO][4552] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.104 [INFO][4552] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37 Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.107 [INFO][4552] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.112 [INFO][4552] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.112 [INFO][4552] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" host="localhost" Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.113 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:35:01.394026 containerd[1466]: 2025-02-13 15:35:01.113 [INFO][4552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" HandleID="k8s-pod-network.47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Workload="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" Feb 13 15:35:01.399104 containerd[1466]: 2025-02-13 15:35:01.117 [INFO][4430] cni-plugin/k8s.go 386: Populated endpoint ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a3315812-4b88-46a2-9ae1-f680abf2a097", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8j5t5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali147afb17dc7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.399104 containerd[1466]: 2025-02-13 15:35:01.117 [INFO][4430] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" Feb 13 15:35:01.399104 containerd[1466]: 2025-02-13 15:35:01.117 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali147afb17dc7 ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" Feb 13 15:35:01.399104 containerd[1466]: 2025-02-13 15:35:01.120 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" Feb 13 
15:35:01.399104 containerd[1466]: 2025-02-13 15:35:01.121 [INFO][4430] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a3315812-4b88-46a2-9ae1-f680abf2a097", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37", Pod:"coredns-7db6d8ff4d-8j5t5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali147afb17dc7", MAC:"d6:8a:75:e1:79:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.399104 containerd[1466]: 2025-02-13 15:35:01.388 [INFO][4430] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8j5t5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8j5t5-eth0" Feb 13 15:35:01.405084 kubelet[2681]: E0213 15:35:01.405042 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:00.592 [INFO][4418] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:00.886 [INFO][4418] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0 calico-kube-controllers-68948865cb- calico-system 9915f1a9-705a-4e2d-8b4c-287c91d00c76 811 0 2025-02-13 15:34:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68948865cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68948865cb-5zwq2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic59e661cc3b [] []}} ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 
15:35:00.886 [INFO][4418] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.012 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" HandleID="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Workload="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.031 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" HandleID="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Workload="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68948865cb-5zwq2", "timestamp":"2025-02-13 15:35:01.012488859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.031 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.113 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.113 [INFO][4528] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.115 [INFO][4528] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.122 [INFO][4528] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.193 [INFO][4528] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.195 [INFO][4528] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.197 [INFO][4528] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.197 [INFO][4528] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.266 [INFO][4528] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5 Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.293 [INFO][4528] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.353 [INFO][4528] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.353 [INFO][4528] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" host="localhost" Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.353 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:35:01.411263 containerd[1466]: 2025-02-13 15:35:01.353 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" HandleID="k8s-pod-network.d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Workload="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" Feb 13 15:35:01.412218 containerd[1466]: 2025-02-13 15:35:01.356 [INFO][4418] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0", GenerateName:"calico-kube-controllers-68948865cb-", Namespace:"calico-system", SelfLink:"", UID:"9915f1a9-705a-4e2d-8b4c-287c91d00c76", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68948865cb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68948865cb-5zwq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic59e661cc3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.412218 containerd[1466]: 2025-02-13 15:35:01.356 [INFO][4418] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" Feb 13 15:35:01.412218 containerd[1466]: 2025-02-13 15:35:01.356 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic59e661cc3b ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" Feb 13 15:35:01.412218 containerd[1466]: 2025-02-13 15:35:01.361 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" Feb 13 15:35:01.412218 containerd[1466]: 2025-02-13 15:35:01.363 [INFO][4418] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0", GenerateName:"calico-kube-controllers-68948865cb-", Namespace:"calico-system", SelfLink:"", UID:"9915f1a9-705a-4e2d-8b4c-287c91d00c76", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68948865cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5", Pod:"calico-kube-controllers-68948865cb-5zwq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic59e661cc3b", MAC:"12:10:a5:33:1d:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.412218 containerd[1466]: 2025-02-13 15:35:01.399 [INFO][4418] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5" Namespace="calico-system" Pod="calico-kube-controllers-68948865cb-5zwq2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68948865cb--5zwq2-eth0" Feb 13 15:35:01.434658 containerd[1466]: time="2025-02-13T15:35:01.432096216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:01.434658 containerd[1466]: time="2025-02-13T15:35:01.432163126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:01.434658 containerd[1466]: time="2025-02-13T15:35:01.432173405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.434658 containerd[1466]: time="2025-02-13T15:35:01.432251555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.450345 systemd-networkd[1406]: cali7d484e8c52a: Link UP Feb 13 15:35:01.451928 systemd-networkd[1406]: cali7d484e8c52a: Gained carrier Feb 13 15:35:01.454673 containerd[1466]: time="2025-02-13T15:35:01.452121331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:01.454673 containerd[1466]: time="2025-02-13T15:35:01.452188441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:01.454673 containerd[1466]: time="2025-02-13T15:35:01.452202598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.454673 containerd[1466]: time="2025-02-13T15:35:01.452304355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.481930 systemd[1]: Started cri-containerd-1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab.scope - libcontainer container 1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab. Feb 13 15:35:01.485026 containerd[1466]: time="2025-02-13T15:35:01.484820674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:01.485026 containerd[1466]: time="2025-02-13T15:35:01.484934995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:00.649 [INFO][4441] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:00.893 [INFO][4441] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0 calico-apiserver-768d56694b- calico-apiserver b07b9bcf-86ee-434f-97e1-b4c13955f24e 813 0 2025-02-13 15:34:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768d56694b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-768d56694b-6nms6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d484e8c52a [] []}} ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:00.893 [INFO][4441] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.014 [INFO][4533] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" HandleID="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Workload="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.032 [INFO][4533] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" HandleID="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Workload="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-768d56694b-6nms6", "timestamp":"2025-02-13 15:35:01.014875298 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.032 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.353 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.353 [INFO][4533] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.356 [INFO][4533] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.360 [INFO][4533] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.389 [INFO][4533] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.392 [INFO][4533] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.397 [INFO][4533] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.398 [INFO][4533] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.400 [INFO][4533] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1 Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.408 [INFO][4533] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.419 [INFO][4533] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.419 [INFO][4533] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" host="localhost" Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.419 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:35:01.485241 containerd[1466]: 2025-02-13 15:35:01.419 [INFO][4533] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" HandleID="k8s-pod-network.5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Workload="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" Feb 13 15:35:01.485742 containerd[1466]: 2025-02-13 15:35:01.431 [INFO][4441] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0", GenerateName:"calico-apiserver-768d56694b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b07b9bcf-86ee-434f-97e1-b4c13955f24e", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768d56694b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-768d56694b-6nms6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d484e8c52a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.485742 containerd[1466]: 2025-02-13 15:35:01.432 [INFO][4441] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" Feb 13 15:35:01.485742 containerd[1466]: 2025-02-13 15:35:01.432 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d484e8c52a ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" Feb 13 15:35:01.485742 containerd[1466]: 2025-02-13 15:35:01.451 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" Feb 13 15:35:01.485742 containerd[1466]: 2025-02-13 15:35:01.466 [INFO][4441] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0", GenerateName:"calico-apiserver-768d56694b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b07b9bcf-86ee-434f-97e1-b4c13955f24e", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768d56694b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1", Pod:"calico-apiserver-768d56694b-6nms6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d484e8c52a", MAC:"22:78:8f:1d:1e:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.485742 containerd[1466]: 2025-02-13 15:35:01.474 [INFO][4441] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1" Namespace="calico-apiserver" Pod="calico-apiserver-768d56694b-6nms6" WorkloadEndpoint="localhost-k8s-calico--apiserver--768d56694b--6nms6-eth0" Feb 13 15:35:01.489548 containerd[1466]: time="2025-02-13T15:35:01.486939659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:01.489548 containerd[1466]: time="2025-02-13T15:35:01.486975778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:01.489548 containerd[1466]: time="2025-02-13T15:35:01.486986068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.489548 containerd[1466]: time="2025-02-13T15:35:01.487072804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.489760 containerd[1466]: time="2025-02-13T15:35:01.484946457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.494182 containerd[1466]: time="2025-02-13T15:35:01.493632598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.505684 systemd[1]: Started cri-containerd-33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818.scope - libcontainer container 33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818. 
Feb 13 15:35:01.520370 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:01.523729 systemd[1]: Started cri-containerd-d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5.scope - libcontainer container d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5. Feb 13 15:35:01.533002 systemd[1]: Started cri-containerd-47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37.scope - libcontainer container 47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37. Feb 13 15:35:01.558785 systemd-networkd[1406]: cali104f95a9f20: Link UP Feb 13 15:35:01.565272 systemd-networkd[1406]: cali104f95a9f20: Gained carrier Feb 13 15:35:01.569711 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:01.577351 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:01.591824 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:01.603022 containerd[1466]: time="2025-02-13T15:35:01.602246624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:01.603022 containerd[1466]: time="2025-02-13T15:35:01.602334314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:01.603022 containerd[1466]: time="2025-02-13T15:35:01.602351717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.603022 containerd[1466]: time="2025-02-13T15:35:01.602456138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:00.627 [INFO][4473] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:00.887 [INFO][4473] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kxsjw-eth0 csi-node-driver- calico-system 0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c 616 0 2025-02-13 15:34:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kxsjw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali104f95a9f20 [] []}} ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:00.887 [INFO][4473] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-eth0" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.011 [INFO][4534] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" HandleID="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Workload="localhost-k8s-csi--node--driver--kxsjw-eth0" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.032 [INFO][4534] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" HandleID="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Workload="localhost-k8s-csi--node--driver--kxsjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kxsjw", "timestamp":"2025-02-13 15:35:01.011878211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.033 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.420 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.420 [INFO][4534] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.427 [INFO][4534] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.448 [INFO][4534] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.478 [INFO][4534] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.481 [INFO][4534] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.490 [INFO][4534] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.491 [INFO][4534] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.501 [INFO][4534] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.518 [INFO][4534] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.527 [INFO][4534] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.528 [INFO][4534] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" host="localhost" Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.528 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:35:01.607502 containerd[1466]: 2025-02-13 15:35:01.528 [INFO][4534] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" HandleID="k8s-pod-network.6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Workload="localhost-k8s-csi--node--driver--kxsjw-eth0" Feb 13 15:35:01.609249 containerd[1466]: 2025-02-13 15:35:01.555 [INFO][4473] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kxsjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kxsjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali104f95a9f20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.609249 containerd[1466]: 2025-02-13 15:35:01.556 [INFO][4473] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-eth0" Feb 13 15:35:01.609249 containerd[1466]: 2025-02-13 15:35:01.556 [INFO][4473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali104f95a9f20 ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-eth0" Feb 13 15:35:01.609249 containerd[1466]: 2025-02-13 15:35:01.567 [INFO][4473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-eth0" Feb 13 15:35:01.609249 containerd[1466]: 2025-02-13 15:35:01.575 [INFO][4473] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kxsjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 34, 31, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b", Pod:"csi-node-driver-kxsjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali104f95a9f20", MAC:"1e:d1:49:fb:61:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:35:01.609249 containerd[1466]: 2025-02-13 15:35:01.589 [INFO][4473] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b" Namespace="calico-system" Pod="csi-node-driver-kxsjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxsjw-eth0" Feb 13 15:35:01.609249 containerd[1466]: time="2025-02-13T15:35:01.606584575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hknph,Uid:5cb650da-ab42-472c-a151-9ecc42bf5e46,Namespace:kube-system,Attempt:3,} returns sandbox id \"1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab\"" Feb 13 15:35:01.609791 kubelet[2681]: E0213 15:35:01.607966 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:01.611330 containerd[1466]: time="2025-02-13T15:35:01.611236602Z" level=info msg="CreateContainer within sandbox \"1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:35:01.648800 systemd[1]: Started cri-containerd-5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1.scope - libcontainer container 5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1. Feb 13 15:35:01.660650 containerd[1466]: time="2025-02-13T15:35:01.660606564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68948865cb-5zwq2,Uid:9915f1a9-705a-4e2d-8b4c-287c91d00c76,Namespace:calico-system,Attempt:3,} returns sandbox id \"d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5\"" Feb 13 15:35:01.667087 containerd[1466]: time="2025-02-13T15:35:01.667030676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:35:01.667413 containerd[1466]: time="2025-02-13T15:35:01.667303331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8j5t5,Uid:a3315812-4b88-46a2-9ae1-f680abf2a097,Namespace:kube-system,Attempt:3,} returns sandbox id \"47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37\"" Feb 13 15:35:01.669273 containerd[1466]: time="2025-02-13T15:35:01.668674704Z" level=info msg="CreateContainer within sandbox \"1e4c93b02969c03923f1223bf2f777c725448fb8f9a0cffbf117c3519bd327ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75a9d0b5f4de07a9bc93176e17c4328932d180e5e16dcca060a7c99a68a9febc\"" Feb 13 15:35:01.669343 kubelet[2681]: E0213 15:35:01.668770 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:01.670890 containerd[1466]: time="2025-02-13T15:35:01.670840138Z" level=info 
msg="StartContainer for \"75a9d0b5f4de07a9bc93176e17c4328932d180e5e16dcca060a7c99a68a9febc\"" Feb 13 15:35:01.674347 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:01.677244 containerd[1466]: time="2025-02-13T15:35:01.677206519Z" level=info msg="CreateContainer within sandbox \"47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:35:01.679297 containerd[1466]: time="2025-02-13T15:35:01.679267430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-826sp,Uid:866f7a22-7bcf-4b9e-a920-3e5d167eac08,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818\"" Feb 13 15:35:01.693861 containerd[1466]: time="2025-02-13T15:35:01.693135559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:01.693861 containerd[1466]: time="2025-02-13T15:35:01.693200214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:01.693861 containerd[1466]: time="2025-02-13T15:35:01.693211355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.693861 containerd[1466]: time="2025-02-13T15:35:01.693316578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:01.712661 systemd[1]: Started cri-containerd-75a9d0b5f4de07a9bc93176e17c4328932d180e5e16dcca060a7c99a68a9febc.scope - libcontainer container 75a9d0b5f4de07a9bc93176e17c4328932d180e5e16dcca060a7c99a68a9febc. 
Feb 13 15:35:01.713968 containerd[1466]: time="2025-02-13T15:35:01.713913434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768d56694b-6nms6,Uid:b07b9bcf-86ee-434f-97e1-b4c13955f24e,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1\"" Feb 13 15:35:01.716604 systemd[1]: Started cri-containerd-6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b.scope - libcontainer container 6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b. Feb 13 15:35:01.733308 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:01.745860 containerd[1466]: time="2025-02-13T15:35:01.745808658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxsjw,Uid:0dd57d1b-4cfa-4ca9-82c9-3ab2ae38d94c,Namespace:calico-system,Attempt:3,} returns sandbox id \"6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b\"" Feb 13 15:35:01.851987 containerd[1466]: time="2025-02-13T15:35:01.851939027Z" level=info msg="StartContainer for \"75a9d0b5f4de07a9bc93176e17c4328932d180e5e16dcca060a7c99a68a9febc\" returns successfully" Feb 13 15:35:01.997768 containerd[1466]: time="2025-02-13T15:35:01.997718344Z" level=info msg="CreateContainer within sandbox \"47da915429c67299eda793d1b1506497e57de7b8aa29d6ce42882b9770866c37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa2be6b2f3b7dcb479ee59437d2bb3278db106fbd86a5a5b25eebd67bb6de48c\"" Feb 13 15:35:01.998533 containerd[1466]: time="2025-02-13T15:35:01.998495523Z" level=info msg="StartContainer for \"fa2be6b2f3b7dcb479ee59437d2bb3278db106fbd86a5a5b25eebd67bb6de48c\"" Feb 13 15:35:02.033731 systemd[1]: Started cri-containerd-fa2be6b2f3b7dcb479ee59437d2bb3278db106fbd86a5a5b25eebd67bb6de48c.scope - libcontainer container fa2be6b2f3b7dcb479ee59437d2bb3278db106fbd86a5a5b25eebd67bb6de48c. 
Feb 13 15:35:02.127516 kernel: bpftool[5127]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:35:02.168310 containerd[1466]: time="2025-02-13T15:35:02.168253077Z" level=info msg="StartContainer for \"fa2be6b2f3b7dcb479ee59437d2bb3278db106fbd86a5a5b25eebd67bb6de48c\" returns successfully" Feb 13 15:35:02.395326 systemd-networkd[1406]: vxlan.calico: Link UP Feb 13 15:35:02.395335 systemd-networkd[1406]: vxlan.calico: Gained carrier Feb 13 15:35:02.410595 kubelet[2681]: E0213 15:35:02.410537 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:02.416544 kubelet[2681]: E0213 15:35:02.415871 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:02.434357 kubelet[2681]: I0213 15:35:02.429023 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hknph" podStartSLOduration=40.428634925 podStartE2EDuration="40.428634925s" podCreationTimestamp="2025-02-13 15:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:02.424140068 +0000 UTC m=+56.462072934" watchObservedRunningTime="2025-02-13 15:35:02.428634925 +0000 UTC m=+56.466567791" Feb 13 15:35:02.625645 systemd-networkd[1406]: cali147afb17dc7: Gained IPv6LL Feb 13 15:35:02.689803 systemd-networkd[1406]: cali3d89705ff3d: Gained IPv6LL Feb 13 15:35:02.817770 systemd-networkd[1406]: cali40cc0fec4c9: Gained IPv6LL Feb 13 15:35:02.945642 systemd-networkd[1406]: calic59e661cc3b: Gained IPv6LL Feb 13 15:35:03.422540 kubelet[2681]: E0213 15:35:03.422420 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:03.422540 kubelet[2681]: E0213 15:35:03.422524 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:03.457759 systemd-networkd[1406]: cali7d484e8c52a: Gained IPv6LL Feb 13 15:35:03.458181 systemd-networkd[1406]: cali104f95a9f20: Gained IPv6LL Feb 13 15:35:04.161667 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL Feb 13 15:35:04.424168 kubelet[2681]: E0213 15:35:04.423813 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:04.424168 kubelet[2681]: E0213 15:35:04.424167 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:05.472749 containerd[1466]: time="2025-02-13T15:35:05.472689215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:05.473453 containerd[1466]: time="2025-02-13T15:35:05.473409739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 15:35:05.474562 containerd[1466]: time="2025-02-13T15:35:05.474521566Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:05.476715 containerd[1466]: time="2025-02-13T15:35:05.476672821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:05.477361 
containerd[1466]: time="2025-02-13T15:35:05.477320065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.810250124s" Feb 13 15:35:05.477361 containerd[1466]: time="2025-02-13T15:35:05.477355253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 15:35:05.478275 containerd[1466]: time="2025-02-13T15:35:05.478233942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:35:05.486822 containerd[1466]: time="2025-02-13T15:35:05.486773073Z" level=info msg="CreateContainer within sandbox \"d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:35:05.503701 containerd[1466]: time="2025-02-13T15:35:05.503648216Z" level=info msg="CreateContainer within sandbox \"d988281d5d2329c39dda0b5d23676a3adde14b0713ab1fc84ab64e53dd7dd8a5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"18ed2636c414440aff23e65a79a67aeff87073eb3f6043c966bb615506db55b2\"" Feb 13 15:35:05.504229 containerd[1466]: time="2025-02-13T15:35:05.504184687Z" level=info msg="StartContainer for \"18ed2636c414440aff23e65a79a67aeff87073eb3f6043c966bb615506db55b2\"" Feb 13 15:35:05.529613 systemd[1]: Started cri-containerd-18ed2636c414440aff23e65a79a67aeff87073eb3f6043c966bb615506db55b2.scope - libcontainer container 18ed2636c414440aff23e65a79a67aeff87073eb3f6043c966bb615506db55b2. 
Feb 13 15:35:05.569649 containerd[1466]: time="2025-02-13T15:35:05.569597488Z" level=info msg="StartContainer for \"18ed2636c414440aff23e65a79a67aeff87073eb3f6043c966bb615506db55b2\" returns successfully" Feb 13 15:35:06.028119 containerd[1466]: time="2025-02-13T15:35:06.028077223Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\"" Feb 13 15:35:06.028301 containerd[1466]: time="2025-02-13T15:35:06.028226610Z" level=info msg="TearDown network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" successfully" Feb 13 15:35:06.028301 containerd[1466]: time="2025-02-13T15:35:06.028239134Z" level=info msg="StopPodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" returns successfully" Feb 13 15:35:06.035283 containerd[1466]: time="2025-02-13T15:35:06.035224261Z" level=info msg="RemovePodSandbox for \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\"" Feb 13 15:35:06.046194 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:45546.service - OpenSSH per-connection server daemon (10.0.0.1:45546). Feb 13 15:35:06.047191 containerd[1466]: time="2025-02-13T15:35:06.047100702Z" level=info msg="Forcibly stopping sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\"" Feb 13 15:35:06.047302 containerd[1466]: time="2025-02-13T15:35:06.047222696Z" level=info msg="TearDown network for sandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" successfully" Feb 13 15:35:06.061277 containerd[1466]: time="2025-02-13T15:35:06.061204852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.061415 containerd[1466]: time="2025-02-13T15:35:06.061297049Z" level=info msg="RemovePodSandbox \"3cf8592e287c0e644958909a29f7b21a067f3574252b697ec89743fa3409c3f5\" returns successfully" Feb 13 15:35:06.062018 containerd[1466]: time="2025-02-13T15:35:06.061952809Z" level=info msg="StopPodSandbox for \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\"" Feb 13 15:35:06.062233 containerd[1466]: time="2025-02-13T15:35:06.062209481Z" level=info msg="TearDown network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" successfully" Feb 13 15:35:06.062233 containerd[1466]: time="2025-02-13T15:35:06.062229410Z" level=info msg="StopPodSandbox for \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" returns successfully" Feb 13 15:35:06.062674 containerd[1466]: time="2025-02-13T15:35:06.062646912Z" level=info msg="RemovePodSandbox for \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\"" Feb 13 15:35:06.062721 containerd[1466]: time="2025-02-13T15:35:06.062673252Z" level=info msg="Forcibly stopping sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\"" Feb 13 15:35:06.062814 containerd[1466]: time="2025-02-13T15:35:06.062758787Z" level=info msg="TearDown network for sandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" successfully" Feb 13 15:35:06.067886 containerd[1466]: time="2025-02-13T15:35:06.067831791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.067942 containerd[1466]: time="2025-02-13T15:35:06.067898599Z" level=info msg="RemovePodSandbox \"91cadcf0ad0ad0b72807c3d4917d9266b47e4e9ac6a06a4cde24a3f30d4daf05\" returns successfully" Feb 13 15:35:06.069570 containerd[1466]: time="2025-02-13T15:35:06.069131367Z" level=info msg="StopPodSandbox for \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\"" Feb 13 15:35:06.069570 containerd[1466]: time="2025-02-13T15:35:06.069253100Z" level=info msg="TearDown network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\" successfully" Feb 13 15:35:06.069570 containerd[1466]: time="2025-02-13T15:35:06.069267037Z" level=info msg="StopPodSandbox for \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\" returns successfully" Feb 13 15:35:06.069687 containerd[1466]: time="2025-02-13T15:35:06.069596831Z" level=info msg="RemovePodSandbox for \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\"" Feb 13 15:35:06.069687 containerd[1466]: time="2025-02-13T15:35:06.069618492Z" level=info msg="Forcibly stopping sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\"" Feb 13 15:35:06.069746 containerd[1466]: time="2025-02-13T15:35:06.069702394Z" level=info msg="TearDown network for sandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\" successfully" Feb 13 15:35:06.074408 containerd[1466]: time="2025-02-13T15:35:06.074374327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.074408 containerd[1466]: time="2025-02-13T15:35:06.074416718Z" level=info msg="RemovePodSandbox \"5cbb429b6d1493801d0f80c708652d1366912d92c0c11e3826ca968954c341e5\" returns successfully" Feb 13 15:35:06.074704 containerd[1466]: time="2025-02-13T15:35:06.074681307Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\"" Feb 13 15:35:06.074835 containerd[1466]: time="2025-02-13T15:35:06.074783082Z" level=info msg="TearDown network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" successfully" Feb 13 15:35:06.074863 containerd[1466]: time="2025-02-13T15:35:06.074833539Z" level=info msg="StopPodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" returns successfully" Feb 13 15:35:06.075171 containerd[1466]: time="2025-02-13T15:35:06.075114489Z" level=info msg="RemovePodSandbox for \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\"" Feb 13 15:35:06.075214 containerd[1466]: time="2025-02-13T15:35:06.075178863Z" level=info msg="Forcibly stopping sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\"" Feb 13 15:35:06.075302 containerd[1466]: time="2025-02-13T15:35:06.075265479Z" level=info msg="TearDown network for sandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" successfully" Feb 13 15:35:06.079070 containerd[1466]: time="2025-02-13T15:35:06.079037724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.079129 containerd[1466]: time="2025-02-13T15:35:06.079076058Z" level=info msg="RemovePodSandbox \"a58b7f0ebc77f13a9d5e30f36f9831b70e15def9ae55761e4ed9147de925541e\" returns successfully" Feb 13 15:35:06.079379 containerd[1466]: time="2025-02-13T15:35:06.079357247Z" level=info msg="StopPodSandbox for \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\"" Feb 13 15:35:06.079492 containerd[1466]: time="2025-02-13T15:35:06.079452892Z" level=info msg="TearDown network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" successfully" Feb 13 15:35:06.079531 containerd[1466]: time="2025-02-13T15:35:06.079487948Z" level=info msg="StopPodSandbox for \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" returns successfully" Feb 13 15:35:06.079744 containerd[1466]: time="2025-02-13T15:35:06.079712300Z" level=info msg="RemovePodSandbox for \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\"" Feb 13 15:35:06.079744 containerd[1466]: time="2025-02-13T15:35:06.079737889Z" level=info msg="Forcibly stopping sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\"" Feb 13 15:35:06.079847 containerd[1466]: time="2025-02-13T15:35:06.079812963Z" level=info msg="TearDown network for sandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" successfully" Feb 13 15:35:06.083556 containerd[1466]: time="2025-02-13T15:35:06.083514262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.083632 containerd[1466]: time="2025-02-13T15:35:06.083564649Z" level=info msg="RemovePodSandbox \"ef78a1c8f88311e958f6810c3b3bc038e1bdf8140dc73507f8847a6bfb1e8e66\" returns successfully" Feb 13 15:35:06.083859 containerd[1466]: time="2025-02-13T15:35:06.083833586Z" level=info msg="StopPodSandbox for \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\"" Feb 13 15:35:06.083943 containerd[1466]: time="2025-02-13T15:35:06.083917347Z" level=info msg="TearDown network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\" successfully" Feb 13 15:35:06.083943 containerd[1466]: time="2025-02-13T15:35:06.083931223Z" level=info msg="StopPodSandbox for \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\" returns successfully" Feb 13 15:35:06.084845 containerd[1466]: time="2025-02-13T15:35:06.084292207Z" level=info msg="RemovePodSandbox for \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\"" Feb 13 15:35:06.084845 containerd[1466]: time="2025-02-13T15:35:06.084321873Z" level=info msg="Forcibly stopping sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\"" Feb 13 15:35:06.084845 containerd[1466]: time="2025-02-13T15:35:06.084404522Z" level=info msg="TearDown network for sandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\" successfully" Feb 13 15:35:06.088563 containerd[1466]: time="2025-02-13T15:35:06.088538212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.088636 containerd[1466]: time="2025-02-13T15:35:06.088572978Z" level=info msg="RemovePodSandbox \"931e89498f7e7bb32314019cf959df67fba95108c95cc20a72e070989b3a8525\" returns successfully" Feb 13 15:35:06.088854 containerd[1466]: time="2025-02-13T15:35:06.088810715Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\"" Feb 13 15:35:06.088916 containerd[1466]: time="2025-02-13T15:35:06.088902311Z" level=info msg="TearDown network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" successfully" Feb 13 15:35:06.088939 containerd[1466]: time="2025-02-13T15:35:06.088916358Z" level=info msg="StopPodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" returns successfully" Feb 13 15:35:06.089163 containerd[1466]: time="2025-02-13T15:35:06.089134347Z" level=info msg="RemovePodSandbox for \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\"" Feb 13 15:35:06.089203 containerd[1466]: time="2025-02-13T15:35:06.089160858Z" level=info msg="Forcibly stopping sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\"" Feb 13 15:35:06.089252 containerd[1466]: time="2025-02-13T15:35:06.089225041Z" level=info msg="TearDown network for sandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" successfully" Feb 13 15:35:06.093276 containerd[1466]: time="2025-02-13T15:35:06.093239732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.093315 containerd[1466]: time="2025-02-13T15:35:06.093277855Z" level=info msg="RemovePodSandbox \"bf03668ee8f46ca503b13cbbc3332dd3d1d63c1f8879778e28ce3e1bd73c2a53\" returns successfully" Feb 13 15:35:06.093654 containerd[1466]: time="2025-02-13T15:35:06.093627577Z" level=info msg="StopPodSandbox for \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\"" Feb 13 15:35:06.093737 containerd[1466]: time="2025-02-13T15:35:06.093713903Z" level=info msg="TearDown network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" successfully" Feb 13 15:35:06.093737 containerd[1466]: time="2025-02-13T15:35:06.093731888Z" level=info msg="StopPodSandbox for \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" returns successfully" Feb 13 15:35:06.094028 containerd[1466]: time="2025-02-13T15:35:06.094005012Z" level=info msg="RemovePodSandbox for \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\"" Feb 13 15:35:06.094028 containerd[1466]: time="2025-02-13T15:35:06.094028978Z" level=info msg="Forcibly stopping sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\"" Feb 13 15:35:06.094271 containerd[1466]: time="2025-02-13T15:35:06.094227779Z" level=info msg="TearDown network for sandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" successfully" Feb 13 15:35:06.097939 containerd[1466]: time="2025-02-13T15:35:06.097906325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.098013 containerd[1466]: time="2025-02-13T15:35:06.097950449Z" level=info msg="RemovePodSandbox \"a78703f3c41304073e75bafe7cb3002391b345ca6293b792a560ca4cef1a4e1d\" returns successfully" Feb 13 15:35:06.098450 containerd[1466]: time="2025-02-13T15:35:06.098256427Z" level=info msg="StopPodSandbox for \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\"" Feb 13 15:35:06.098450 containerd[1466]: time="2025-02-13T15:35:06.098352412Z" level=info msg="TearDown network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\" successfully" Feb 13 15:35:06.098450 containerd[1466]: time="2025-02-13T15:35:06.098391967Z" level=info msg="StopPodSandbox for \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\" returns successfully" Feb 13 15:35:06.098634 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 45546 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:06.099617 containerd[1466]: time="2025-02-13T15:35:06.099588486Z" level=info msg="RemovePodSandbox for \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\"" Feb 13 15:35:06.099667 containerd[1466]: time="2025-02-13T15:35:06.099618854Z" level=info msg="Forcibly stopping sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\"" Feb 13 15:35:06.099735 containerd[1466]: time="2025-02-13T15:35:06.099695051Z" level=info msg="TearDown network for sandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\" successfully" Feb 13 15:35:06.100496 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:06.103460 containerd[1466]: time="2025-02-13T15:35:06.103419775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:35:06.103539 containerd[1466]: time="2025-02-13T15:35:06.103465873Z" level=info msg="RemovePodSandbox \"f40440705c9b300cc7a2ea4e7f917b5a824a84e489ed752ec6cdcd045214e59b\" returns successfully"
Feb 13 15:35:06.103802 containerd[1466]: time="2025-02-13T15:35:06.103779816Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\""
Feb 13 15:35:06.103901 containerd[1466]: time="2025-02-13T15:35:06.103871553Z" level=info msg="TearDown network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" successfully"
Feb 13 15:35:06.103901 containerd[1466]: time="2025-02-13T15:35:06.103884738Z" level=info msg="StopPodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" returns successfully"
Feb 13 15:35:06.104210 containerd[1466]: time="2025-02-13T15:35:06.104143315Z" level=info msg="RemovePodSandbox for \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\""
Feb 13 15:35:06.104210 containerd[1466]: time="2025-02-13T15:35:06.104167511Z" level=info msg="Forcibly stopping sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\""
Feb 13 15:35:06.104288 containerd[1466]: time="2025-02-13T15:35:06.104240601Z" level=info msg="TearDown network for sandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" successfully"
Feb 13 15:35:06.104901 systemd-logind[1449]: New session 17 of user core.
Feb 13 15:35:06.108205 containerd[1466]: time="2025-02-13T15:35:06.108164959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.108205 containerd[1466]: time="2025-02-13T15:35:06.108204324Z" level=info msg="RemovePodSandbox \"028e604fa58ce75f9c101a3c705c77e5e77a2066edd01f902697550b49d16505\" returns successfully"
Feb 13 15:35:06.108490 containerd[1466]: time="2025-02-13T15:35:06.108451660Z" level=info msg="StopPodSandbox for \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\""
Feb 13 15:35:06.108592 containerd[1466]: time="2025-02-13T15:35:06.108565909Z" level=info msg="TearDown network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" successfully"
Feb 13 15:35:06.108592 containerd[1466]: time="2025-02-13T15:35:06.108585808Z" level=info msg="StopPodSandbox for \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" returns successfully"
Feb 13 15:35:06.108830 containerd[1466]: time="2025-02-13T15:35:06.108806251Z" level=info msg="RemovePodSandbox for \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\""
Feb 13 15:35:06.108867 containerd[1466]: time="2025-02-13T15:35:06.108831109Z" level=info msg="Forcibly stopping sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\""
Feb 13 15:35:06.108950 containerd[1466]: time="2025-02-13T15:35:06.108917545Z" level=info msg="TearDown network for sandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" successfully"
Feb 13 15:35:06.111627 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:35:06.113509 containerd[1466]: time="2025-02-13T15:35:06.113464027Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.113564 containerd[1466]: time="2025-02-13T15:35:06.113517981Z" level=info msg="RemovePodSandbox \"e93e38d255d630e305d0a44cde945d2003b2c357a5e619255381baa2e49a589e\" returns successfully"
Feb 13 15:35:06.113821 containerd[1466]: time="2025-02-13T15:35:06.113782399Z" level=info msg="StopPodSandbox for \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\""
Feb 13 15:35:06.113899 containerd[1466]: time="2025-02-13T15:35:06.113872542Z" level=info msg="TearDown network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\" successfully"
Feb 13 15:35:06.113899 containerd[1466]: time="2025-02-13T15:35:06.113888623Z" level=info msg="StopPodSandbox for \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\" returns successfully"
Feb 13 15:35:06.114295 containerd[1466]: time="2025-02-13T15:35:06.114262371Z" level=info msg="RemovePodSandbox for \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\""
Feb 13 15:35:06.114350 containerd[1466]: time="2025-02-13T15:35:06.114296947Z" level=info msg="Forcibly stopping sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\""
Feb 13 15:35:06.114428 containerd[1466]: time="2025-02-13T15:35:06.114380297Z" level=info msg="TearDown network for sandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\" successfully"
Feb 13 15:35:06.118411 containerd[1466]: time="2025-02-13T15:35:06.118382925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.118465 containerd[1466]: time="2025-02-13T15:35:06.118421760Z" level=info msg="RemovePodSandbox \"f70be7dcea2af3910a06c69a3b8187a6dfb33c134f34b77521a573178970798e\" returns successfully"
Feb 13 15:35:06.118739 containerd[1466]: time="2025-02-13T15:35:06.118712097Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\""
Feb 13 15:35:06.118846 containerd[1466]: time="2025-02-13T15:35:06.118821487Z" level=info msg="TearDown network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" successfully"
Feb 13 15:35:06.118846 containerd[1466]: time="2025-02-13T15:35:06.118838701Z" level=info msg="StopPodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" returns successfully"
Feb 13 15:35:06.119144 containerd[1466]: time="2025-02-13T15:35:06.119121604Z" level=info msg="RemovePodSandbox for \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\""
Feb 13 15:35:06.119176 containerd[1466]: time="2025-02-13T15:35:06.119144408Z" level=info msg="Forcibly stopping sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\""
Feb 13 15:35:06.119234 containerd[1466]: time="2025-02-13T15:35:06.119208159Z" level=info msg="TearDown network for sandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" successfully"
Feb 13 15:35:06.122841 containerd[1466]: time="2025-02-13T15:35:06.122806872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.122841 containerd[1466]: time="2025-02-13T15:35:06.122836108Z" level=info msg="RemovePodSandbox \"25c4145e7967a957ce35f3ea36db06181928e437ee211eaf1c3ed03cff126607\" returns successfully"
Feb 13 15:35:06.123091 containerd[1466]: time="2025-02-13T15:35:06.123071881Z" level=info msg="StopPodSandbox for \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\""
Feb 13 15:35:06.123165 containerd[1466]: time="2025-02-13T15:35:06.123144430Z" level=info msg="TearDown network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" successfully"
Feb 13 15:35:06.123165 containerd[1466]: time="2025-02-13T15:35:06.123157836Z" level=info msg="StopPodSandbox for \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" returns successfully"
Feb 13 15:35:06.124382 containerd[1466]: time="2025-02-13T15:35:06.123382437Z" level=info msg="RemovePodSandbox for \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\""
Feb 13 15:35:06.124382 containerd[1466]: time="2025-02-13T15:35:06.123411653Z" level=info msg="Forcibly stopping sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\""
Feb 13 15:35:06.124382 containerd[1466]: time="2025-02-13T15:35:06.123506516Z" level=info msg="TearDown network for sandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" successfully"
Feb 13 15:35:06.127682 containerd[1466]: time="2025-02-13T15:35:06.127654943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.127732 containerd[1466]: time="2025-02-13T15:35:06.127707285Z" level=info msg="RemovePodSandbox \"619d2c65208126005ecaa7cedf5dd1a705da521bc1c364d3c0c9ddebf2e925a4\" returns successfully"
Feb 13 15:35:06.128057 containerd[1466]: time="2025-02-13T15:35:06.128033401Z" level=info msg="StopPodSandbox for \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\""
Feb 13 15:35:06.128138 containerd[1466]: time="2025-02-13T15:35:06.128117802Z" level=info msg="TearDown network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\" successfully"
Feb 13 15:35:06.128138 containerd[1466]: time="2025-02-13T15:35:06.128131689Z" level=info msg="StopPodSandbox for \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\" returns successfully"
Feb 13 15:35:06.128375 containerd[1466]: time="2025-02-13T15:35:06.128354397Z" level=info msg="RemovePodSandbox for \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\""
Feb 13 15:35:06.128375 containerd[1466]: time="2025-02-13T15:35:06.128372743Z" level=info msg="Forcibly stopping sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\""
Feb 13 15:35:06.128463 containerd[1466]: time="2025-02-13T15:35:06.128429200Z" level=info msg="TearDown network for sandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\" successfully"
Feb 13 15:35:06.132025 containerd[1466]: time="2025-02-13T15:35:06.131976715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.132025 containerd[1466]: time="2025-02-13T15:35:06.132027362Z" level=info msg="RemovePodSandbox \"8e24c51f48e312bcdce4181471ed3b56fa7bbe4953435624a5ffbe03cb394862\" returns successfully"
Feb 13 15:35:06.132266 containerd[1466]: time="2025-02-13T15:35:06.132243147Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\""
Feb 13 15:35:06.132362 containerd[1466]: time="2025-02-13T15:35:06.132320184Z" level=info msg="TearDown network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" successfully"
Feb 13 15:35:06.132362 containerd[1466]: time="2025-02-13T15:35:06.132333901Z" level=info msg="StopPodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" returns successfully"
Feb 13 15:35:06.132657 containerd[1466]: time="2025-02-13T15:35:06.132627575Z" level=info msg="RemovePodSandbox for \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\""
Feb 13 15:35:06.132710 containerd[1466]: time="2025-02-13T15:35:06.132660759Z" level=info msg="Forcibly stopping sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\""
Feb 13 15:35:06.132805 containerd[1466]: time="2025-02-13T15:35:06.132756332Z" level=info msg="TearDown network for sandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" successfully"
Feb 13 15:35:06.136977 containerd[1466]: time="2025-02-13T15:35:06.136942512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.137037 containerd[1466]: time="2025-02-13T15:35:06.137009761Z" level=info msg="RemovePodSandbox \"cba171ad9dc765b36a840779747ddfa07003f363611aa3c123f9760018c253ac\" returns successfully"
Feb 13 15:35:06.137285 containerd[1466]: time="2025-02-13T15:35:06.137254152Z" level=info msg="StopPodSandbox for \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\""
Feb 13 15:35:06.137394 containerd[1466]: time="2025-02-13T15:35:06.137368491Z" level=info msg="TearDown network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" successfully"
Feb 13 15:35:06.137394 containerd[1466]: time="2025-02-13T15:35:06.137388870Z" level=info msg="StopPodSandbox for \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" returns successfully"
Feb 13 15:35:06.137625 containerd[1466]: time="2025-02-13T15:35:06.137590387Z" level=info msg="RemovePodSandbox for \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\""
Feb 13 15:35:06.137625 containerd[1466]: time="2025-02-13T15:35:06.137615676Z" level=info msg="Forcibly stopping sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\""
Feb 13 15:35:06.137726 containerd[1466]: time="2025-02-13T15:35:06.137690499Z" level=info msg="TearDown network for sandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" successfully"
Feb 13 15:35:06.141814 containerd[1466]: time="2025-02-13T15:35:06.141780905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.141860 containerd[1466]: time="2025-02-13T15:35:06.141829699Z" level=info msg="RemovePodSandbox \"b6ea3056be9052d9dfa211985510f1df9e34b0e916546c98d48f59da2a1e60f8\" returns successfully"
Feb 13 15:35:06.142122 containerd[1466]: time="2025-02-13T15:35:06.142092103Z" level=info msg="StopPodSandbox for \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\""
Feb 13 15:35:06.142213 containerd[1466]: time="2025-02-13T15:35:06.142192937Z" level=info msg="TearDown network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\" successfully"
Feb 13 15:35:06.142239 containerd[1466]: time="2025-02-13T15:35:06.142211353Z" level=info msg="StopPodSandbox for \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\" returns successfully"
Feb 13 15:35:06.142567 containerd[1466]: time="2025-02-13T15:35:06.142536958Z" level=info msg="RemovePodSandbox for \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\""
Feb 13 15:35:06.142567 containerd[1466]: time="2025-02-13T15:35:06.142564791Z" level=info msg="Forcibly stopping sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\""
Feb 13 15:35:06.142699 containerd[1466]: time="2025-02-13T15:35:06.142648362Z" level=info msg="TearDown network for sandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\" successfully"
Feb 13 15:35:06.146677 containerd[1466]: time="2025-02-13T15:35:06.146641081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:35:06.146759 containerd[1466]: time="2025-02-13T15:35:06.146690335Z" level=info msg="RemovePodSandbox \"8ceac55b61c23f8209d4177232e7c6fd894b5e43b111f0d29b6a66244bc10c66\" returns successfully"
Feb 13 15:35:06.245009 sshd[5260]: Connection closed by 10.0.0.1 port 45546
Feb 13 15:35:06.245401 sshd-session[5258]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:06.250273 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:45546.service: Deactivated successfully.
Feb 13 15:35:06.253020 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:35:06.253806 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:35:06.254762 systemd-logind[1449]: Removed session 17.
Feb 13 15:35:06.446786 kubelet[2681]: I0213 15:35:06.446731 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8j5t5" podStartSLOduration=45.446714246 podStartE2EDuration="45.446714246s" podCreationTimestamp="2025-02-13 15:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:02.462783603 +0000 UTC m=+56.500716480" watchObservedRunningTime="2025-02-13 15:35:06.446714246 +0000 UTC m=+60.484647112"
Feb 13 15:35:06.448070 kubelet[2681]: I0213 15:35:06.447978 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68948865cb-5zwq2" podStartSLOduration=31.632498985 podStartE2EDuration="35.447972132s" podCreationTimestamp="2025-02-13 15:34:31 +0000 UTC" firstStartedPulling="2025-02-13 15:35:01.662629453 +0000 UTC m=+55.700562319" lastFinishedPulling="2025-02-13 15:35:05.4781026 +0000 UTC m=+59.516035466" observedRunningTime="2025-02-13 15:35:06.446160102 +0000 UTC m=+60.484092968" watchObservedRunningTime="2025-02-13 15:35:06.447972132 +0000 UTC m=+60.485904998"
Feb 13 15:35:07.426159 kubelet[2681]: E0213 15:35:07.426116 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:35:09.738725 containerd[1466]: time="2025-02-13T15:35:09.738660472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:09.741069 containerd[1466]: time="2025-02-13T15:35:09.741000287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Feb 13 15:35:09.742411 containerd[1466]: time="2025-02-13T15:35:09.742352871Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:09.744815 containerd[1466]: time="2025-02-13T15:35:09.744767159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:09.745438 containerd[1466]: time="2025-02-13T15:35:09.745399732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.267132657s"
Feb 13 15:35:09.745438 containerd[1466]: time="2025-02-13T15:35:09.745432725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 15:35:09.747167 containerd[1466]: time="2025-02-13T15:35:09.746728790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 15:35:09.748502 containerd[1466]: time="2025-02-13T15:35:09.748451162Z" level=info msg="CreateContainer within sandbox \"33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 15:35:09.761371 containerd[1466]: time="2025-02-13T15:35:09.761315286Z" level=info msg="CreateContainer within sandbox \"33ab3c0d8b95bda04718a2f889b29f657498f114452d69940c11a79d3e5f9818\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"030774644e2dbc1bf9e7db873bdaf7473f646b07255809326ab25890eda6df78\""
Feb 13 15:35:09.762181 containerd[1466]: time="2025-02-13T15:35:09.761842817Z" level=info msg="StartContainer for \"030774644e2dbc1bf9e7db873bdaf7473f646b07255809326ab25890eda6df78\""
Feb 13 15:35:09.796671 systemd[1]: Started cri-containerd-030774644e2dbc1bf9e7db873bdaf7473f646b07255809326ab25890eda6df78.scope - libcontainer container 030774644e2dbc1bf9e7db873bdaf7473f646b07255809326ab25890eda6df78.
Feb 13 15:35:09.838825 containerd[1466]: time="2025-02-13T15:35:09.838772087Z" level=info msg="StartContainer for \"030774644e2dbc1bf9e7db873bdaf7473f646b07255809326ab25890eda6df78\" returns successfully"
Feb 13 15:35:10.348888 containerd[1466]: time="2025-02-13T15:35:10.348835357Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:10.349766 containerd[1466]: time="2025-02-13T15:35:10.349724591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Feb 13 15:35:10.352259 containerd[1466]: time="2025-02-13T15:35:10.352224461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 605.46388ms"
Feb 13 15:35:10.352259 containerd[1466]: time="2025-02-13T15:35:10.352251492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 15:35:10.353887 containerd[1466]: time="2025-02-13T15:35:10.353853152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 15:35:10.354865 containerd[1466]: time="2025-02-13T15:35:10.354837758Z" level=info msg="CreateContainer within sandbox \"5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 15:35:10.369816 containerd[1466]: time="2025-02-13T15:35:10.369771834Z" level=info msg="CreateContainer within sandbox \"5e4d3311058d3e4b5981d97fa5f9e7c1583a1bc841559c1030dccd9e85072ed1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc85f963d236d2c1403d9b423c9cb831add124876a951358fb5ea5c5d5df83fd\""
Feb 13 15:35:10.370425 containerd[1466]: time="2025-02-13T15:35:10.370380049Z" level=info msg="StartContainer for \"cc85f963d236d2c1403d9b423c9cb831add124876a951358fb5ea5c5d5df83fd\""
Feb 13 15:35:10.403661 systemd[1]: Started cri-containerd-cc85f963d236d2c1403d9b423c9cb831add124876a951358fb5ea5c5d5df83fd.scope - libcontainer container cc85f963d236d2c1403d9b423c9cb831add124876a951358fb5ea5c5d5df83fd.
Feb 13 15:35:10.648442 containerd[1466]: time="2025-02-13T15:35:10.648263398Z" level=info msg="StartContainer for \"cc85f963d236d2c1403d9b423c9cb831add124876a951358fb5ea5c5d5df83fd\" returns successfully"
Feb 13 15:35:10.683209 kubelet[2681]: I0213 15:35:10.682589 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768d56694b-826sp" podStartSLOduration=32.61803969 podStartE2EDuration="40.682461845s" podCreationTimestamp="2025-02-13 15:34:30 +0000 UTC" firstStartedPulling="2025-02-13 15:35:01.682075131 +0000 UTC m=+55.720007997" lastFinishedPulling="2025-02-13 15:35:09.746497286 +0000 UTC m=+63.784430152" observedRunningTime="2025-02-13 15:35:10.681977708 +0000 UTC m=+64.719910574" watchObservedRunningTime="2025-02-13 15:35:10.682461845 +0000 UTC m=+64.720394711"
Feb 13 15:35:11.269284 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:42118.service - OpenSSH per-connection server daemon (10.0.0.1:42118).
Feb 13 15:35:11.327968 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 42118 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:35:11.329603 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:11.333345 systemd-logind[1449]: New session 18 of user core.
Feb 13 15:35:11.343631 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:35:11.570369 sshd[5417]: Connection closed by 10.0.0.1 port 42118
Feb 13 15:35:11.570838 sshd-session[5415]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:11.574816 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:42118.service: Deactivated successfully.
Feb 13 15:35:11.577138 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:35:11.577771 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:35:11.578667 systemd-logind[1449]: Removed session 18.
Feb 13 15:35:11.658679 kubelet[2681]: I0213 15:35:11.658647 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:35:11.712301 kubelet[2681]: I0213 15:35:11.712224 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768d56694b-6nms6" podStartSLOduration=33.075471526 podStartE2EDuration="41.712207217s" podCreationTimestamp="2025-02-13 15:34:30 +0000 UTC" firstStartedPulling="2025-02-13 15:35:01.716227014 +0000 UTC m=+55.754159880" lastFinishedPulling="2025-02-13 15:35:10.352962705 +0000 UTC m=+64.390895571" observedRunningTime="2025-02-13 15:35:11.707131256 +0000 UTC m=+65.745064122" watchObservedRunningTime="2025-02-13 15:35:11.712207217 +0000 UTC m=+65.750140083"
Feb 13 15:35:12.546879 containerd[1466]: time="2025-02-13T15:35:12.546824499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:12.547552 containerd[1466]: time="2025-02-13T15:35:12.547501274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 15:35:12.548720 containerd[1466]: time="2025-02-13T15:35:12.548692194Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:12.559922 containerd[1466]: time="2025-02-13T15:35:12.559880569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:12.560605 containerd[1466]: time="2025-02-13T15:35:12.560572544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.206685656s"
Feb 13 15:35:12.560605 containerd[1466]: time="2025-02-13T15:35:12.560599014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Feb 13 15:35:12.562866 containerd[1466]: time="2025-02-13T15:35:12.562833801Z" level=info msg="CreateContainer within sandbox \"6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 15:35:12.583298 containerd[1466]: time="2025-02-13T15:35:12.583244197Z" level=info msg="CreateContainer within sandbox \"6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8187ef37b5a715f886e00449b93849c04a36868f329346b9938bafd96d4f61c7\""
Feb 13 15:35:12.583728 containerd[1466]: time="2025-02-13T15:35:12.583697525Z" level=info msg="StartContainer for \"8187ef37b5a715f886e00449b93849c04a36868f329346b9938bafd96d4f61c7\""
Feb 13 15:35:12.619611 systemd[1]: Started cri-containerd-8187ef37b5a715f886e00449b93849c04a36868f329346b9938bafd96d4f61c7.scope - libcontainer container 8187ef37b5a715f886e00449b93849c04a36868f329346b9938bafd96d4f61c7.
Feb 13 15:35:12.648631 containerd[1466]: time="2025-02-13T15:35:12.648592636Z" level=info msg="StartContainer for \"8187ef37b5a715f886e00449b93849c04a36868f329346b9938bafd96d4f61c7\" returns successfully"
Feb 13 15:35:12.649633 containerd[1466]: time="2025-02-13T15:35:12.649599764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 15:35:12.660789 kubelet[2681]: I0213 15:35:12.660764 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:35:16.586676 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:35288.service - OpenSSH per-connection server daemon (10.0.0.1:35288).
Feb 13 15:35:16.635575 sshd[5483]: Accepted publickey for core from 10.0.0.1 port 35288 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:35:16.637596 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:16.642255 systemd-logind[1449]: New session 19 of user core.
Feb 13 15:35:16.648789 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:35:16.779921 sshd[5485]: Connection closed by 10.0.0.1 port 35288
Feb 13 15:35:16.780296 sshd-session[5483]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:16.784291 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:35288.service: Deactivated successfully.
Feb 13 15:35:16.786175 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:35:16.786863 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:35:16.787773 systemd-logind[1449]: Removed session 19.
Feb 13 15:35:16.863069 containerd[1466]: time="2025-02-13T15:35:16.862921500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:16.864824 containerd[1466]: time="2025-02-13T15:35:16.864777987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Feb 13 15:35:16.866742 containerd[1466]: time="2025-02-13T15:35:16.866687034Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:16.871536 containerd[1466]: time="2025-02-13T15:35:16.871495238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:16.872329 containerd[1466]: time="2025-02-13T15:35:16.872289275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 4.222658772s"
Feb 13 15:35:16.872372 containerd[1466]: time="2025-02-13T15:35:16.872328350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Feb 13 15:35:16.874452 containerd[1466]: time="2025-02-13T15:35:16.874411329Z" level=info msg="CreateContainer within sandbox \"6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 15:35:16.890107 containerd[1466]: time="2025-02-13T15:35:16.890047347Z" level=info msg="CreateContainer within sandbox \"6f230ae7116ffd840fbb0c5463b0a3ff1155655e3f8be608cfe7bbe92b7c723b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b033ef4daeefa5cfc73721cd4110faff54fce5ba55f68b8ebbfd3f0b13881f8c\""
Feb 13 15:35:16.890573 containerd[1466]: time="2025-02-13T15:35:16.890545529Z" level=info msg="StartContainer for \"b033ef4daeefa5cfc73721cd4110faff54fce5ba55f68b8ebbfd3f0b13881f8c\""
Feb 13 15:35:16.925622 systemd[1]: Started cri-containerd-b033ef4daeefa5cfc73721cd4110faff54fce5ba55f68b8ebbfd3f0b13881f8c.scope - libcontainer container b033ef4daeefa5cfc73721cd4110faff54fce5ba55f68b8ebbfd3f0b13881f8c.
Feb 13 15:35:16.957319 containerd[1466]: time="2025-02-13T15:35:16.957252325Z" level=info msg="StartContainer for \"b033ef4daeefa5cfc73721cd4110faff54fce5ba55f68b8ebbfd3f0b13881f8c\" returns successfully"
Feb 13 15:35:17.106501 kubelet[2681]: I0213 15:35:17.106440 2681 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 15:35:17.106501 kubelet[2681]: I0213 15:35:17.106473 2681 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 15:35:17.721071 kubelet[2681]: I0213 15:35:17.720802 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kxsjw" podStartSLOduration=31.59617193 podStartE2EDuration="46.720787236s" podCreationTimestamp="2025-02-13 15:34:31 +0000 UTC" firstStartedPulling="2025-02-13 15:35:01.748553469 +0000 UTC m=+55.786486335" lastFinishedPulling="2025-02-13 15:35:16.873168775 +0000 UTC m=+70.911101641" observedRunningTime="2025-02-13 15:35:17.720448899 +0000 UTC m=+71.758381765" watchObservedRunningTime="2025-02-13 15:35:17.720787236 +0000 UTC m=+71.758720102"
Feb 13 15:35:21.796456 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:35290.service - OpenSSH per-connection server daemon (10.0.0.1:35290).
Feb 13 15:35:21.841445 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 35290 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:35:21.843212 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:21.846883 systemd-logind[1449]: New session 20 of user core.
Feb 13 15:35:21.860605 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:35:21.971709 sshd[5537]: Connection closed by 10.0.0.1 port 35290
Feb 13 15:35:21.972276 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:21.985053 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:35290.service: Deactivated successfully.
Feb 13 15:35:21.987256 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:35:21.988814 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:35:21.999879 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:35292.service - OpenSSH per-connection server daemon (10.0.0.1:35292).
Feb 13 15:35:22.000909 systemd-logind[1449]: Removed session 20.
Feb 13 15:35:22.038750 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 35292 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw
Feb 13 15:35:22.040274 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:22.044630 systemd-logind[1449]: New session 21 of user core.
Feb 13 15:35:22.054658 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:35:22.304066 sshd[5551]: Connection closed by 10.0.0.1 port 35292 Feb 13 15:35:22.304536 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:22.314415 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:35292.service: Deactivated successfully. Feb 13 15:35:22.316239 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:35:22.317627 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:35:22.326723 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:35304.service - OpenSSH per-connection server daemon (10.0.0.1:35304). Feb 13 15:35:22.327605 systemd-logind[1449]: Removed session 21. Feb 13 15:35:22.364811 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 35304 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:22.366098 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:22.369838 systemd-logind[1449]: New session 22 of user core. Feb 13 15:35:22.379591 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:35:23.977162 sshd[5563]: Connection closed by 10.0.0.1 port 35304 Feb 13 15:35:23.977735 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:23.985543 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:35304.service: Deactivated successfully. Feb 13 15:35:23.988033 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:35:23.989730 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:35:23.997830 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:35314.service - OpenSSH per-connection server daemon (10.0.0.1:35314). Feb 13 15:35:24.000584 systemd-logind[1449]: Removed session 22. 
Feb 13 15:35:24.041403 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 35314 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:24.043106 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:24.047031 systemd-logind[1449]: New session 23 of user core. Feb 13 15:35:24.054597 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:35:24.322525 sshd[5612]: Connection closed by 10.0.0.1 port 35314 Feb 13 15:35:24.323932 sshd-session[5609]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:24.338424 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:35314.service: Deactivated successfully. Feb 13 15:35:24.340862 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:35:24.342694 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:35:24.348771 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:35328.service - OpenSSH per-connection server daemon (10.0.0.1:35328). Feb 13 15:35:24.350413 systemd-logind[1449]: Removed session 23. Feb 13 15:35:24.387547 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 35328 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:24.389091 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:24.393075 systemd-logind[1449]: New session 24 of user core. Feb 13 15:35:24.402605 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:35:24.509083 sshd[5626]: Connection closed by 10.0.0.1 port 35328 Feb 13 15:35:24.509457 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:24.513427 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:35328.service: Deactivated successfully. Feb 13 15:35:24.515579 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:35:24.516203 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. 
Feb 13 15:35:24.517117 systemd-logind[1449]: Removed session 24. Feb 13 15:35:29.393907 kubelet[2681]: I0213 15:35:29.393858 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:35:29.522602 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:59080.service - OpenSSH per-connection server daemon (10.0.0.1:59080). Feb 13 15:35:29.578396 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 59080 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:29.579778 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:29.583770 systemd-logind[1449]: New session 25 of user core. Feb 13 15:35:29.590605 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:35:29.713221 sshd[5640]: Connection closed by 10.0.0.1 port 59080 Feb 13 15:35:29.714405 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:29.718882 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:59080.service: Deactivated successfully. Feb 13 15:35:29.721059 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:35:29.721811 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:35:29.722815 systemd-logind[1449]: Removed session 25. Feb 13 15:35:34.724562 systemd[1]: Started sshd@25-10.0.0.118:22-10.0.0.1:59096.service - OpenSSH per-connection server daemon (10.0.0.1:59096). Feb 13 15:35:34.770678 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 59096 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:34.772224 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:34.775850 systemd-logind[1449]: New session 26 of user core. Feb 13 15:35:34.783599 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 15:35:34.903328 sshd[5662]: Connection closed by 10.0.0.1 port 59096 Feb 13 15:35:34.903679 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:34.907854 systemd[1]: sshd@25-10.0.0.118:22-10.0.0.1:59096.service: Deactivated successfully. Feb 13 15:35:34.910000 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:35:34.910628 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:35:34.911498 systemd-logind[1449]: Removed session 26. Feb 13 15:35:36.031833 kubelet[2681]: E0213 15:35:36.031792 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:36.032720 kubelet[2681]: E0213 15:35:36.031877 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:38.031709 kubelet[2681]: E0213 15:35:38.031651 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:39.916072 systemd[1]: Started sshd@26-10.0.0.118:22-10.0.0.1:43346.service - OpenSSH per-connection server daemon (10.0.0.1:43346). Feb 13 15:35:40.051999 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 43346 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:40.053758 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:40.057781 systemd-logind[1449]: New session 27 of user core. Feb 13 15:35:40.063594 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 15:35:40.168013 sshd[5699]: Connection closed by 10.0.0.1 port 43346 Feb 13 15:35:40.168269 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:40.171735 systemd[1]: sshd@26-10.0.0.118:22-10.0.0.1:43346.service: Deactivated successfully. Feb 13 15:35:40.173559 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:35:40.174211 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:35:40.175158 systemd-logind[1449]: Removed session 27. Feb 13 15:35:45.183450 systemd[1]: Started sshd@27-10.0.0.118:22-10.0.0.1:43350.service - OpenSSH per-connection server daemon (10.0.0.1:43350). Feb 13 15:35:45.225168 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 43350 ssh2: RSA SHA256:CjBnnOu2nrbFyXIVJoKq+2bOe/qWKJpdmfPZgw4OlSw Feb 13 15:35:45.227043 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:45.231002 systemd-logind[1449]: New session 28 of user core. Feb 13 15:35:45.240709 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:35:45.345831 sshd[5742]: Connection closed by 10.0.0.1 port 43350 Feb 13 15:35:45.346211 sshd-session[5739]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:45.349908 systemd[1]: sshd@27-10.0.0.118:22-10.0.0.1:43350.service: Deactivated successfully. Feb 13 15:35:45.351915 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:35:45.352622 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:35:45.353456 systemd-logind[1449]: Removed session 28. Feb 13 15:35:47.145504 kubelet[2681]: I0213 15:35:47.145450 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"