Feb 13 19:49:29.895436 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 19:49:29.895457 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 19:49:29.895469 kernel: BIOS-provided physical RAM map: Feb 13 19:49:29.895477 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 19:49:29.895483 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 19:49:29.895491 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 19:49:29.895498 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Feb 13 19:49:29.895505 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Feb 13 19:49:29.895511 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 19:49:29.895533 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 19:49:29.895540 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 19:49:29.895546 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 19:49:29.895552 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 19:49:29.895559 kernel: NX (Execute Disable) protection: active Feb 13 19:49:29.895566 kernel: APIC: Static calls initialized Feb 13 19:49:29.895576 kernel: SMBIOS 2.8 present. 
Feb 13 19:49:29.895582 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 13 19:49:29.895589 kernel: Hypervisor detected: KVM Feb 13 19:49:29.895596 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:49:29.895603 kernel: kvm-clock: using sched offset of 2206241543 cycles Feb 13 19:49:29.895610 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:49:29.895617 kernel: tsc: Detected 2794.750 MHz processor Feb 13 19:49:29.895624 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:49:29.895631 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:49:29.895638 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Feb 13 19:49:29.895647 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 19:49:29.895654 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:49:29.895661 kernel: Using GB pages for direct mapping Feb 13 19:49:29.895668 kernel: ACPI: Early table checksum verification disabled Feb 13 19:49:29.895675 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Feb 13 19:49:29.895681 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:49:29.895688 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:49:29.895695 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:49:29.895704 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 13 19:49:29.895711 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:49:29.895718 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:49:29.895724 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:49:29.895807 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:49:29.895814 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Feb 13 19:49:29.895822 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Feb 13 19:49:29.895833 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 13 19:49:29.895842 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Feb 13 19:49:29.895850 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Feb 13 19:49:29.895857 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Feb 13 19:49:29.895864 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Feb 13 19:49:29.895871 kernel: No NUMA configuration found Feb 13 19:49:29.895878 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Feb 13 19:49:29.895885 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Feb 13 19:49:29.895894 kernel: Zone ranges: Feb 13 19:49:29.895901 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:49:29.895909 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Feb 13 19:49:29.895916 kernel: Normal empty Feb 13 19:49:29.895923 kernel: Movable zone start for each node Feb 13 19:49:29.895930 kernel: Early memory node ranges Feb 13 19:49:29.895937 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 19:49:29.895944 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Feb 13 19:49:29.895951 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Feb 13 19:49:29.895960 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:49:29.895967 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 19:49:29.895974 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Feb 13 19:49:29.895981 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 19:49:29.895988 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:49:29.895996 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:49:29.896003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 19:49:29.896010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:49:29.896017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:49:29.896026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:49:29.896033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:49:29.896040 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:49:29.896047 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 19:49:29.896054 kernel: TSC deadline timer available Feb 13 19:49:29.896061 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 19:49:29.896068 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 19:49:29.896075 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 19:49:29.896083 kernel: kvm-guest: setup PV sched yield Feb 13 19:49:29.896092 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 19:49:29.896099 kernel: Booting paravirtualized kernel on KVM Feb 13 19:49:29.896106 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:49:29.896113 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 19:49:29.896121 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 19:49:29.896128 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 19:49:29.896135 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 19:49:29.896142 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:49:29.896149 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:49:29.896159 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 19:49:29.896167 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:49:29.896174 kernel: random: crng init done Feb 13 19:49:29.896181 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:49:29.896188 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:49:29.896195 kernel: Fallback order for Node 0: 0 Feb 13 19:49:29.896202 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Feb 13 19:49:29.896209 kernel: Policy zone: DMA32 Feb 13 19:49:29.896216 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:49:29.896226 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 136900K reserved, 0K cma-reserved) Feb 13 19:49:29.896233 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:49:29.896240 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 19:49:29.896247 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:49:29.896254 kernel: Dynamic Preempt: voluntary Feb 13 19:49:29.896261 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:49:29.896269 kernel: rcu: RCU event tracing is enabled. Feb 13 19:49:29.896276 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:49:29.896284 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:49:29.896293 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:49:29.896300 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:49:29.896307 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:49:29.896314 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:49:29.896321 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 19:49:29.896345 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:49:29.896352 kernel: Console: colour VGA+ 80x25 Feb 13 19:49:29.896359 kernel: printk: console [ttyS0] enabled Feb 13 19:49:29.896367 kernel: ACPI: Core revision 20230628 Feb 13 19:49:29.896380 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 19:49:29.896389 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:49:29.896397 kernel: x2apic enabled Feb 13 19:49:29.896404 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:49:29.896411 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 19:49:29.896418 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 19:49:29.896426 kernel: kvm-guest: setup PV IPIs Feb 13 19:49:29.896442 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 19:49:29.896450 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 19:49:29.896457 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 13 19:49:29.896465 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 19:49:29.896472 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 19:49:29.896482 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 19:49:29.896490 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:49:29.896499 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 19:49:29.896507 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:49:29.896539 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:49:29.896546 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 19:49:29.896554 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 19:49:29.896561 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:49:29.896569 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:49:29.896576 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 19:49:29.896584 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 19:49:29.896592 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 19:49:29.896600 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:49:29.896610 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:49:29.896617 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:49:29.896624 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:49:29.896632 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 19:49:29.896640 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:49:29.896647 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:49:29.896654 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:49:29.896662 kernel: landlock: Up and running. Feb 13 19:49:29.896669 kernel: SELinux: Initializing. Feb 13 19:49:29.896679 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:49:29.896686 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:49:29.896694 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 19:49:29.896702 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:49:29.896709 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:49:29.896717 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:49:29.896724 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 19:49:29.896732 kernel: ... version: 0 Feb 13 19:49:29.896742 kernel: ... bit width: 48 Feb 13 19:49:29.896749 kernel: ... generic registers: 6 Feb 13 19:49:29.896757 kernel: ... value mask: 0000ffffffffffff Feb 13 19:49:29.896767 kernel: ... max period: 00007fffffffffff Feb 13 19:49:29.896775 kernel: ... fixed-purpose events: 0 Feb 13 19:49:29.896782 kernel: ... 
event mask: 000000000000003f Feb 13 19:49:29.896790 kernel: signal: max sigframe size: 1776 Feb 13 19:49:29.896804 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:49:29.896812 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:49:29.896819 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:49:29.896829 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:49:29.896837 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 19:49:29.896844 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:49:29.896851 kernel: smpboot: Max logical packages: 1 Feb 13 19:49:29.896860 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 13 19:49:29.896867 kernel: devtmpfs: initialized Feb 13 19:49:29.896874 kernel: x86/mm: Memory block size: 128MB Feb 13 19:49:29.896882 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:49:29.896889 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:49:29.896899 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:49:29.896907 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:49:29.896914 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:49:29.896922 kernel: audit: type=2000 audit(1739476169.425:1): state=initialized audit_enabled=0 res=1 Feb 13 19:49:29.896929 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:49:29.896936 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:49:29.896944 kernel: cpuidle: using governor menu Feb 13 19:49:29.896951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:49:29.896958 kernel: dca service started, version 1.12.1 Feb 13 19:49:29.896968 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 19:49:29.896976 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 19:49:29.896983 kernel: PCI: Using configuration type 1 for base access Feb 13 19:49:29.896991 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:49:29.896998 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:49:29.897006 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:49:29.897013 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:49:29.897021 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:49:29.897028 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:49:29.897038 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:49:29.897045 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:49:29.897053 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:49:29.897060 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:49:29.897067 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:49:29.897075 kernel: ACPI: Interpreter enabled Feb 13 19:49:29.897082 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 19:49:29.897089 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:49:29.897097 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:49:29.897107 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 19:49:29.897114 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 19:49:29.897122 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:49:29.897297 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:49:29.897444 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 19:49:29.897624 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 19:49:29.897636 kernel: PCI host bridge to bus 0000:00 Feb 13 19:49:29.897763 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:49:29.897884 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:49:29.897994 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:49:29.898102 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 19:49:29.898211 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 19:49:29.898320 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Feb 13 19:49:29.898429 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:49:29.898583 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 19:49:29.898712 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 19:49:29.898862 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 13 19:49:29.899078 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 13 19:49:29.899289 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 13 19:49:29.899415 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:49:29.899646 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:49:29.899834 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 19:49:29.900002 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 13 19:49:29.900155 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 13 19:49:29.900320 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 19:49:29.900500 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 19:49:29.900730 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 13 
19:49:29.900900 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 13 19:49:29.901065 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 19:49:29.901224 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Feb 13 19:49:29.901386 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 13 19:49:29.901648 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 13 19:49:29.901817 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 13 19:49:29.901985 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 19:49:29.902173 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 19:49:29.902346 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 19:49:29.902508 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Feb 13 19:49:29.902693 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Feb 13 19:49:29.902879 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 19:49:29.903033 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 19:49:29.903053 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:49:29.903065 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:49:29.903076 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:49:29.903086 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 19:49:29.903097 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 19:49:29.903108 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 19:49:29.903118 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 19:49:29.903129 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 19:49:29.903140 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 19:49:29.903151 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 19:49:29.903164 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 19:49:29.903175 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 19:49:29.903187 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 19:49:29.903198 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 19:49:29.903209 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 19:49:29.903220 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 19:49:29.903231 kernel: iommu: Default domain type: Translated Feb 13 19:49:29.903242 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:49:29.903253 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:49:29.903268 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:49:29.903280 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 19:49:29.903291 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Feb 13 19:49:29.903458 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 19:49:29.903747 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 19:49:29.903929 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 19:49:29.903947 kernel: vgaarb: loaded Feb 13 19:49:29.903960 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 19:49:29.903977 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 19:49:29.903989 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:49:29.904000 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 
19:49:29.904013 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:49:29.904025 kernel: pnp: PnP ACPI init Feb 13 19:49:29.904197 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 19:49:29.904216 kernel: pnp: PnP ACPI: found 6 devices Feb 13 19:49:29.904228 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:49:29.904244 kernel: NET: Registered PF_INET protocol family Feb 13 19:49:29.904256 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:49:29.904268 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:49:29.904280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:49:29.904292 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:49:29.904305 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:49:29.904317 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:49:29.904329 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:49:29.904341 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:49:29.904357 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:49:29.904369 kernel: NET: Registered PF_XDP protocol family Feb 13 19:49:29.904542 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:49:29.904694 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:49:29.904851 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:49:29.904996 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 19:49:29.905169 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 19:49:29.905311 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Feb 13 19:49:29.905331 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:49:29.905342 kernel: Initialise system trusted keyrings Feb 13 19:49:29.905352 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:49:29.905362 kernel: Key type asymmetric registered Feb 13 19:49:29.905373 kernel: Asymmetric key parser 'x509' registered Feb 13 19:49:29.905384 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:49:29.905395 kernel: io scheduler mq-deadline registered Feb 13 19:49:29.905406 kernel: io scheduler kyber registered Feb 13 19:49:29.905416 kernel: io scheduler bfq registered Feb 13 19:49:29.905430 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:49:29.905442 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 19:49:29.905452 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 19:49:29.905461 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 19:49:29.905472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:49:29.905483 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:49:29.905496 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:49:29.905510 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:49:29.905551 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:49:29.905733 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 19:49:29.905900 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 19:49:29.905917 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Feb 13 19:49:29.906065 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:49:29 UTC (1739476169) Feb 13 19:49:29.906218 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 19:49:29.906235 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 19:49:29.906246 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:49:29.906257 kernel: Segment Routing with IPv6 Feb 13 19:49:29.906272 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:49:29.906283 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:49:29.906294 kernel: Key type dns_resolver registered Feb 13 19:49:29.906305 kernel: IPI shorthand broadcast: enabled Feb 13 19:49:29.906315 kernel: sched_clock: Marking stable (664003371, 115321122)->(799150527, -19826034) Feb 13 19:49:29.906327 kernel: registered taskstats version 1 Feb 13 19:49:29.906338 kernel: Loading compiled-in X.509 certificates Feb 13 19:49:29.906349 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 19:49:29.906359 kernel: Key type .fscrypt registered Feb 13 19:49:29.906374 kernel: Key type fscrypt-provisioning registered Feb 13 19:49:29.906385 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 19:49:29.906396 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:49:29.906406 kernel: ima: No architecture policies found Feb 13 19:49:29.906417 kernel: clk: Disabling unused clocks Feb 13 19:49:29.906427 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 19:49:29.906438 kernel: Write protecting the kernel read-only data: 36864k Feb 13 19:49:29.906449 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 19:49:29.906460 kernel: Run /init as init process Feb 13 19:49:29.906475 kernel: with arguments: Feb 13 19:49:29.906486 kernel: /init Feb 13 19:49:29.906496 kernel: with environment: Feb 13 19:49:29.906506 kernel: HOME=/ Feb 13 19:49:29.906563 kernel: TERM=linux Feb 13 19:49:29.906577 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:49:29.906590 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:49:29.906604 systemd[1]: Detected virtualization kvm. Feb 13 19:49:29.906621 systemd[1]: Detected architecture x86-64. Feb 13 19:49:29.906632 systemd[1]: Running in initrd. Feb 13 19:49:29.906643 systemd[1]: No hostname configured, using default hostname. Feb 13 19:49:29.906654 systemd[1]: Hostname set to . Feb 13 19:49:29.906666 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:49:29.906678 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:49:29.906689 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:49:29.906701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:49:29.906718 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:49:29.906745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 19:49:29.906760 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:49:29.906772 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:49:29.906787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:49:29.906812 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:49:29.906824 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:49:29.906836 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:49:29.906848 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:49:29.906860 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:49:29.906872 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:49:29.906884 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:49:29.906895 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:49:29.906911 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:49:29.906923 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:49:29.906939 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:49:29.906950 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:49:29.906962 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:49:29.906974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:49:29.906986 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:49:29.906998 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:49:29.907013 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:49:29.907025 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:49:29.907037 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:49:29.907049 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:49:29.907061 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:49:29.907073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:49:29.907085 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:49:29.907097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:49:29.907108 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:49:29.907153 systemd-journald[193]: Collecting audit messages is disabled. Feb 13 19:49:29.907187 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:49:29.907203 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:49:29.907216 systemd-journald[193]: Journal started Feb 13 19:49:29.907244 systemd-journald[193]: Runtime Journal (/run/log/journal/bbb1e1366d65462cb76b1fd6a3d77ba4) is 6.0M, max 48.4M, 42.3M free. Feb 13 19:49:29.899370 systemd-modules-load[194]: Inserted module 'overlay' Feb 13 19:49:29.939787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 19:49:29.939824 kernel: Bridge firewalling registered Feb 13 19:49:29.926293 systemd-modules-load[194]: Inserted module 'br_netfilter' Feb 13 19:49:29.942655 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:49:29.943081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:49:29.945369 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:49:29.967814 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:49:29.971028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:49:29.973664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:49:29.976901 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:49:29.985590 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:49:29.986570 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:49:29.987947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:49:29.989683 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:49:29.997476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:49:29.999360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:49:30.011125 dracut-cmdline[227]: dracut-dracut-053 Feb 13 19:49:30.014742 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 19:49:30.035101 systemd-resolved[230]: Positive Trust Anchors: Feb 13 19:49:30.035120 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:49:30.035151 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:49:30.037601 systemd-resolved[230]: Defaulting to hostname 'linux'. Feb 13 19:49:30.038744 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:49:30.045875 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:49:30.117563 kernel: SCSI subsystem initialized Feb 13 19:49:30.127560 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:49:30.140587 kernel: iscsi: registered transport (tcp) Feb 13 19:49:30.161752 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:49:30.161830 kernel: QLogic iSCSI HBA Driver Feb 13 19:49:30.213110 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 19:49:30.224713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:49:30.251573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:49:30.251676 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:49:30.251689 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:49:30.295564 kernel: raid6: avx2x4 gen() 26063 MB/s Feb 13 19:49:30.312553 kernel: raid6: avx2x2 gen() 30164 MB/s Feb 13 19:49:30.329643 kernel: raid6: avx2x1 gen() 25860 MB/s Feb 13 19:49:30.329701 kernel: raid6: using algorithm avx2x2 gen() 30164 MB/s Feb 13 19:49:30.347844 kernel: raid6: .... xor() 19601 MB/s, rmw enabled Feb 13 19:49:30.347908 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:49:30.368560 kernel: xor: automatically using best checksumming function avx Feb 13 19:49:30.527564 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:49:30.539605 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:49:30.553790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:49:30.568034 systemd-udevd[413]: Using default interface naming scheme 'v255'. Feb 13 19:49:30.572808 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:49:30.582728 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:49:30.598803 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Feb 13 19:49:30.636853 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:49:30.648860 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:49:30.722604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:49:30.731699 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:49:30.745099 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:49:30.746909 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:49:30.750711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:49:30.753354 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:49:30.761547 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:49:30.761763 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:49:30.773091 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:49:30.786058 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:49:30.786073 kernel: AES CTR mode by8 optimization enabled Feb 13 19:49:30.786083 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:49:30.786235 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:49:30.786247 kernel: GPT:9289727 != 19775487 Feb 13 19:49:30.786257 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:49:30.786268 kernel: GPT:9289727 != 19775487 Feb 13 19:49:30.786277 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:49:30.786287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:49:30.776249 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:49:30.798545 kernel: libata version 3.00 loaded. Feb 13 19:49:30.803380 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 19:49:30.803510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:49:30.808639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:49:30.814038 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:49:30.839120 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:49:30.839137 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466) Feb 13 19:49:30.839149 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (481) Feb 13 19:49:30.839160 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:49:30.839324 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:49:30.839462 kernel: scsi host0: ahci Feb 13 19:49:30.839716 kernel: scsi host1: ahci Feb 13 19:49:30.839871 kernel: scsi host2: ahci Feb 13 19:49:30.840012 kernel: scsi host3: ahci Feb 13 19:49:30.840163 kernel: scsi host4: ahci Feb 13 19:49:30.840316 kernel: scsi host5: ahci Feb 13 19:49:30.840466 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 19:49:30.840478 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 19:49:30.840488 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 19:49:30.840498 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 19:49:30.840509 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 19:49:30.840540 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 19:49:30.811761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:49:30.811958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:49:30.814937 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:49:30.821718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:49:30.833839 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:49:30.849056 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:49:30.875000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:49:30.886569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:49:30.892732 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:49:30.894057 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:49:30.909684 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:49:30.912658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:49:30.919841 disk-uuid[565]: Primary Header is updated. Feb 13 19:49:30.919841 disk-uuid[565]: Secondary Entries is updated. Feb 13 19:49:30.919841 disk-uuid[565]: Secondary Header is updated. Feb 13 19:49:30.923642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:49:30.927548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:49:30.931264 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:49:31.148635 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:49:31.148720 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:49:31.148747 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:49:31.148759 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:49:31.150553 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:49:31.150586 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:49:31.151552 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:49:31.152940 kernel: ata3.00: applying bridge limits Feb 13 19:49:31.152962 kernel: ata3.00: configured for UDMA/100 Feb 13 19:49:31.153555 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:49:31.202550 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:49:31.216401 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:49:31.216425 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:49:31.929540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:49:31.929703 disk-uuid[569]: The operation has completed successfully. Feb 13 19:49:31.956678 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:49:31.956807 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:49:31.989700 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:49:31.993030 sh[593]: Success Feb 13 19:49:32.005997 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:49:32.044275 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:49:32.060029 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:49:32.062941 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:49:32.077879 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 19:49:32.077908 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:49:32.077920 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:49:32.078898 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:49:32.079633 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:49:32.084014 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:49:32.086305 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:49:32.094675 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:49:32.096984 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:49:32.106664 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:49:32.106684 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:49:32.106694 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:49:32.110536 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:49:32.119243 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:49:32.121006 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:49:32.130359 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 19:49:32.137727 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:49:32.197745 ignition[683]: Ignition 2.19.0 Feb 13 19:49:32.198863 ignition[683]: Stage: fetch-offline Feb 13 19:49:32.198910 ignition[683]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:49:32.198922 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:49:32.199015 ignition[683]: parsed url from cmdline: "" Feb 13 19:49:32.199019 ignition[683]: no config URL provided Feb 13 19:49:32.199025 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:49:32.199033 ignition[683]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:49:32.199061 ignition[683]: op(1): [started] loading QEMU firmware config module Feb 13 19:49:32.199066 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:49:32.205707 ignition[683]: op(1): [finished] loading QEMU firmware config module Feb 13 19:49:32.233810 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:49:32.247763 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:49:32.251650 ignition[683]: parsing config with SHA512: b14089526c3393a0264e7c298b5c485120a8bc40c65cf032f649c2d22bf0556e51f8cf9092009999acdcb42ee787b7965717bcb885931d113bf75bd755a88754 Feb 13 19:49:32.255706 unknown[683]: fetched base config from "system" Feb 13 19:49:32.255720 unknown[683]: fetched user config from "qemu" Feb 13 19:49:32.256603 ignition[683]: fetch-offline: fetch-offline passed Feb 13 19:49:32.256692 ignition[683]: Ignition finished successfully Feb 13 19:49:32.259068 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:49:32.278337 systemd-networkd[782]: lo: Link UP Feb 13 19:49:32.278348 systemd-networkd[782]: lo: Gained carrier Feb 13 19:49:32.281366 systemd-networkd[782]: Enumeration completed Feb 13 19:49:32.281477 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:49:32.282009 systemd[1]: Reached target network.target - Network. Feb 13 19:49:32.282252 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:49:32.287838 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:49:32.287842 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:49:32.288890 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:49:32.288967 systemd-networkd[782]: eth0: Link UP Feb 13 19:49:32.288971 systemd-networkd[782]: eth0: Gained carrier Feb 13 19:49:32.288978 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:49:32.305759 ignition[786]: Ignition 2.19.0 Feb 13 19:49:32.305771 ignition[786]: Stage: kargs Feb 13 19:49:32.305999 ignition[786]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:49:32.306013 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:49:32.307587 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:49:32.309404 ignition[786]: kargs: kargs passed Feb 13 19:49:32.309457 ignition[786]: Ignition finished successfully Feb 13 19:49:32.314134 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:49:32.322773 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:49:32.339107 ignition[795]: Ignition 2.19.0 Feb 13 19:49:32.339120 ignition[795]: Stage: disks Feb 13 19:49:32.339302 ignition[795]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:49:32.339321 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:49:32.343937 ignition[795]: disks: disks passed Feb 13 19:49:32.344004 ignition[795]: Ignition finished successfully Feb 13 19:49:32.347606 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:49:32.349902 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:49:32.350381 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:49:32.350949 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:49:32.351314 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:49:32.352063 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:49:32.369747 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:49:32.387445 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:49:32.395265 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:49:32.405776 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:49:32.496565 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 19:49:32.496570 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:49:32.497619 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:49:32.510716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:49:32.512896 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:49:32.513400 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:49:32.513439 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:49:32.521763 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Feb 13 19:49:32.513461 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Feb 13 19:49:32.525649 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:49:32.525669 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:49:32.525680 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:49:32.527541 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:49:32.538599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:49:32.543946 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:49:32.545162 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:49:32.586940 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:49:32.591056 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:49:32.596506 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:49:32.601318 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:49:32.656416 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.13 Feb 13 19:49:32.656440 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Feb 13 19:49:32.691265 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:49:32.702666 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:49:32.704740 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:49:32.712554 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:49:32.729698 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:49:32.734374 ignition[929]: INFO : Ignition 2.19.0 Feb 13 19:49:32.734374 ignition[929]: INFO : Stage: mount Feb 13 19:49:32.736463 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:49:32.736463 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:49:32.736463 ignition[929]: INFO : mount: mount passed Feb 13 19:49:32.736463 ignition[929]: INFO : Ignition finished successfully Feb 13 19:49:32.737974 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:49:32.750636 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:49:33.077281 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:49:33.090874 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:49:33.099222 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Feb 13 19:49:33.099259 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:49:33.099271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:49:33.100823 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:49:33.103549 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:49:33.105093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:49:33.127795 ignition[961]: INFO : Ignition 2.19.0 Feb 13 19:49:33.127795 ignition[961]: INFO : Stage: files Feb 13 19:49:33.130052 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:49:33.130052 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:49:33.130052 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:49:33.133999 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:49:33.133999 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:49:33.137395 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:49:33.137395 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:49:33.137395 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:49:33.137395 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:49:33.137395 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 19:49:33.135026 unknown[961]: wrote ssh authorized keys file for user: core Feb 13 19:49:33.180222 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:49:33.298661 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:49:33.301192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 19:49:33.811303 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:49:34.232454 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:49:34.232454 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:49:34.237064 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:49:34.238157 systemd-networkd[782]: eth0: Gained IPv6LL Feb 13 19:49:34.258559 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:49:34.264722 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:49:34.266262 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:49:34.266262 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:49:34.266262 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:49:34.266262 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:49:34.266262 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:49:34.266262 ignition[961]: INFO : files: files passed Feb 13 19:49:34.266262 ignition[961]: INFO : Ignition finished successfully Feb 13 19:49:34.267727 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:49:34.277875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:49:34.280809 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 19:49:34.283617 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:49:34.283786 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:49:34.308003 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:49:34.310861 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:49:34.310861 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:49:34.314508 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:49:34.317881 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:49:34.320886 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:49:34.331876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:49:34.364827 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:49:34.365007 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:49:34.365999 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:49:34.368903 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:49:34.369269 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:49:34.379808 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:49:34.402799 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:49:34.416770 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:49:34.427475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:49:34.428960 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:49:34.431346 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:49:34.433555 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:49:34.433702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:49:34.435865 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:49:34.437656 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:49:34.439755 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:49:34.441844 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:49:34.444024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:49:34.446293 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:49:34.448475 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:49:34.450833 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:49:34.452827 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:49:34.455019 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:49:34.456868 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:49:34.457052 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:49:34.459245 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
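[Editorial aside, not part of the log] The Ignition "files" stage that finished just above (ops 1-12) wrote SSH keys for the "core" user, fetched archives into /opt, created the /etc/extensions/kubernetes.raw symlink, and installed/preset systemd units. The actual config served to this VM never appears in the journal; purely as a hedged illustration, the Python sketch below emits an Ignition-style (spec v3) JSON document requesting the same kinds of operations. Paths and URLs are copied from the log entries above; the schema field names and the spec version are assumptions about Ignition v3, and the key/unit bodies are placeholders.

```python
# Illustrative only: an Ignition-style (spec v3) config resembling what the
# "files" stage above executed. NOT the real config used for this boot.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            # placeholder key; the log only says keys were added for "core"
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}
        ]
    },
    "storage": {
        "files": [
            {   # matches op(3): fetch a remote archive into /opt
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            },
            # placeholder contents; the log only records that the file was written
            {"path": "/etc/flatcar/update.conf", "contents": {"source": "data:,"}},
        ],
        "links": [
            {   # matches op(9): the sysext activation symlink
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# placeholder unit body"},
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
```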
Feb 13 19:49:34.460920 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:49:34.463012 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:49:34.463141 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:49:34.465238 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:49:34.465366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:49:34.467923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:49:34.468073 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:49:34.470005 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:49:34.471873 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:49:34.475821 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:49:34.478263 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:49:34.480292 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:49:34.482201 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:49:34.482342 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:49:34.484186 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:49:34.484283 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:49:34.486641 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:49:34.486792 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:49:34.488689 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:49:34.488800 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:49:34.502799 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:49:34.503809 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:49:34.503941 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:49:34.506817 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:49:34.507810 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:49:34.507961 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:49:34.510419 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:49:34.510723 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:49:34.516532 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:49:34.516654 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:49:34.521790 ignition[1015]: INFO : Ignition 2.19.0 Feb 13 19:49:34.521790 ignition[1015]: INFO : Stage: umount Feb 13 19:49:34.521790 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:49:34.521790 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:49:34.521790 ignition[1015]: INFO : umount: umount passed Feb 13 19:49:34.521790 ignition[1015]: INFO : Ignition finished successfully Feb 13 19:49:34.523756 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:49:34.523886 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:49:34.525861 systemd[1]: Stopped target network.target - Network. 
Feb 13 19:49:34.527756 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:49:34.527811 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:49:34.529546 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:49:34.529593 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:49:34.531423 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:49:34.531469 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:49:34.536593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:49:34.536645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:49:34.538792 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:49:34.540830 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:49:34.543697 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:49:34.547558 systemd-networkd[782]: eth0: DHCPv6 lease lost Feb 13 19:49:34.551541 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:49:34.566648 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:49:34.570382 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:49:34.571651 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:49:34.575270 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:49:34.575336 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:49:34.588743 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:49:34.589508 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:49:34.589638 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:49:34.592239 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:49:34.592300 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:49:34.592898 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:49:34.592952 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:49:34.593217 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:49:34.593267 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:49:34.593797 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:49:34.605446 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:49:34.605608 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:49:34.623487 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:49:34.623753 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:49:34.624552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:49:34.624612 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:49:34.627230 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:49:34.627276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:49:34.629379 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:49:34.629441 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:49:34.630248 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:49:34.630301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:49:34.635402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:49:34.635468 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:49:34.648900 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:49:34.651170 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:49:34.651278 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:49:34.652042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:49:34.652107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:49:34.693188 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:49:34.693366 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:49:34.924713 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:49:34.925829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:49:34.927945 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:49:34.930121 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:49:34.930188 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:49:34.945814 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:49:34.957369 systemd[1]: Switching root. Feb 13 19:49:34.997133 systemd-journald[193]: Journal stopped Feb 13 19:49:36.313562 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Feb 13 19:49:36.313646 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:49:36.313667 kernel: SELinux: policy capability open_perms=1 Feb 13 19:49:36.313681 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:49:36.313695 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:49:36.313715 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:49:36.313730 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:49:36.313744 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:49:36.313758 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:49:36.313772 kernel: audit: type=1403 audit(1739476175.521:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:49:36.313795 systemd[1]: Successfully loaded SELinux policy in 45.086ms. Feb 13 19:49:36.313825 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.007ms. Feb 13 19:49:36.313880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:49:36.313898 systemd[1]: Detected virtualization kvm. Feb 13 19:49:36.313919 systemd[1]: Detected architecture x86-64. Feb 13 19:49:36.313939 systemd[1]: Detected first boot. Feb 13 19:49:36.313956 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:49:36.313971 zram_generator::config[1060]: No configuration found. Feb 13 19:49:36.313995 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 19:49:36.314011 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:49:36.314027 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:49:36.314044 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:49:36.314064 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:49:36.314081 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:49:36.314097 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:49:36.314115 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:49:36.314132 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:49:36.314149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:49:36.314166 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:49:36.314183 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:49:36.314202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:49:36.314219 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:49:36.314236 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:49:36.314929 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:49:36.314969 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:49:36.314988 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:49:36.315005 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:49:36.315021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:49:36.315037 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:49:36.315060 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:49:36.315074 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:49:36.315090 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:49:36.315105 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:49:36.315123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:49:36.315138 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:49:36.315153 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:49:36.315178 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:49:36.315197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:49:36.315214 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:49:36.315230 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:49:36.315247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:49:36.315263 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:49:36.315279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 19:49:36.315295 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:49:36.315311 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:49:36.315327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:36.315348 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:49:36.315365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:49:36.315381 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:49:36.315398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:49:36.315414 systemd[1]: Reached target machines.target - Containers. Feb 13 19:49:36.315430 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:49:36.315446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:49:36.315462 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:49:36.315480 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:49:36.315500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:49:36.315535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:49:36.315553 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:49:36.315570 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:49:36.315586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:49:36.315603 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:49:36.315629 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:49:36.315646 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:49:36.315666 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:49:36.315683 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:49:36.315699 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:49:36.315715 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:49:36.315731 kernel: fuse: init (API version 7.39) Feb 13 19:49:36.315784 systemd-journald[1123]: Collecting audit messages is disabled. Feb 13 19:49:36.315817 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:49:36.315834 systemd-journald[1123]: Journal started Feb 13 19:49:36.316100 systemd-journald[1123]: Runtime Journal (/run/log/journal/bbb1e1366d65462cb76b1fd6a3d77ba4) is 6.0M, max 48.4M, 42.3M free. Feb 13 19:49:36.079988 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:49:36.317604 kernel: loop: module loaded Feb 13 19:49:36.097436 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:49:36.097983 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:49:36.320751 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Feb 13 19:49:36.358904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:49:36.361539 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:49:36.361579 kernel: ACPI: bus type drm_connector registered Feb 13 19:49:36.361610 systemd[1]: Stopped verity-setup.service. Feb 13 19:49:36.366303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:36.368559 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:49:36.370452 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:49:36.371809 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:49:36.373210 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:49:36.374469 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:49:36.396022 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:49:36.397437 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:49:36.398842 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:49:36.400677 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:49:36.400851 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:49:36.402530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:49:36.402706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:49:36.404339 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:49:36.404512 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:49:36.406179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:49:36.406366 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:49:36.408105 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:49:36.408273 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:49:36.409856 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:49:36.410022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:49:36.411595 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:49:36.413202 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:49:36.415003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:49:36.432080 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:49:36.443745 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:49:36.446900 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:49:36.448194 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:49:36.448243 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:49:36.450712 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:49:36.453545 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:49:36.457163 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Feb 13 19:49:36.458841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:49:36.460639 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:49:36.465278 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:49:36.467472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:49:36.471686 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:49:36.487758 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:49:36.489896 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:49:36.503227 systemd-journald[1123]: Time spent on flushing to /var/log/journal/bbb1e1366d65462cb76b1fd6a3d77ba4 is 35.029ms for 949 entries. Feb 13 19:49:36.503227 systemd-journald[1123]: System Journal (/var/log/journal/bbb1e1366d65462cb76b1fd6a3d77ba4) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:49:36.618911 systemd-journald[1123]: Received client request to flush runtime journal. Feb 13 19:49:36.618968 kernel: loop0: detected capacity change from 0 to 218376 Feb 13 19:49:36.618993 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:49:36.494804 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:49:36.499943 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:49:36.501505 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:49:36.504822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:49:36.544769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:49:36.546909 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:49:36.549597 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:49:36.561786 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:49:36.578957 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:49:36.587709 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:49:36.598754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:49:36.607680 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:49:36.619273 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:49:36.621419 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:49:36.623892 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:49:36.626442 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:49:36.640037 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:49:36.644579 kernel: loop1: detected capacity change from 0 to 140768 Feb 13 19:49:36.649819 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 19:49:36.674003 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Feb 13 19:49:36.674021 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Feb 13 19:49:36.681352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:49:36.682545 kernel: loop2: detected capacity change from 0 to 142488 Feb 13 19:49:36.716552 kernel: loop3: detected capacity change from 0 to 218376 Feb 13 19:49:36.724970 kernel: loop4: detected capacity change from 0 to 140768 Feb 13 19:49:36.734551 kernel: loop5: detected capacity change from 0 to 142488 Feb 13 19:49:36.745589 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:49:36.746292 (sd-merge)[1199]: Merged extensions into '/usr'. Feb 13 19:49:36.752177 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:49:36.752193 systemd[1]: Reloading... Feb 13 19:49:36.825555 zram_generator::config[1228]: No configuration found. Feb 13 19:49:36.912511 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:49:36.963199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:37.025945 systemd[1]: Reloading finished in 273 ms. Feb 13 19:49:37.062057 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:49:37.064022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:49:37.090944 systemd[1]: Starting ensure-sysext.service... Feb 13 19:49:37.094150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:49:37.099473 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:49:37.099486 systemd[1]: Reloading... Feb 13 19:49:37.121858 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:49:37.122324 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:49:37.123599 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:49:37.123995 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Feb 13 19:49:37.124090 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Feb 13 19:49:37.131970 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:49:37.132109 systemd-tmpfiles[1263]: Skipping /boot Feb 13 19:49:37.148014 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:49:37.148187 systemd-tmpfiles[1263]: Skipping /boot Feb 13 19:49:37.166545 zram_generator::config[1292]: No configuration found. Feb 13 19:49:37.270926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:37.321146 systemd[1]: Reloading finished in 221 ms. Feb 13 19:49:37.340910 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:49:37.353960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
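[Editorial aside, not part of the log] The sd-merge messages above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr. Those Flatcar images are prebuilt .raw files, but as a rough sketch of the mechanism, the tree below is roughly what a minimal extension must contain before being packed into an image and dropped into /etc/extensions; all names here are hypothetical.

```python
# Hypothetical sketch of a minimal systemd-sysext extension tree; the real
# Flatcar extensions merged above are prebuilt .raw images, not built like this.
import os
from pathlib import Path

name = "hello"                                   # hypothetical extension name
root = Path(f"{name}-sysext")
(root / "usr/bin").mkdir(parents=True, exist_ok=True)
(root / "usr/lib/extension-release.d").mkdir(parents=True, exist_ok=True)

# Payload that shows up under /usr once the extension is merged.
tool = root / "usr/bin/hello"
tool.write_text("#!/bin/sh\necho hello from a sysext\n")
os.chmod(tool, 0o755)

# systemd-sysext refuses to merge an image unless this release file exists and
# matches the host's os-release; ID=_any opts out of the match.
(root / f"usr/lib/extension-release.d/extension-release.{name}").write_text("ID=_any\n")

print(f"pack {root}/ into {name}.raw and place it in /etc/extensions to have it merged")
```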
Feb 13 19:49:37.362667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:37.365287 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:49:37.367675 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:49:37.373733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:49:37.377889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:49:37.381808 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:49:37.386776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:37.386957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:49:37.392622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:49:37.395878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:49:37.397373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:49:37.397898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:49:37.399894 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:49:37.401026 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:37.405740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:37.405978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:49:37.406240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:49:37.406675 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:37.407443 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:49:37.408866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:49:37.413422 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Feb 13 19:49:37.413789 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:49:37.417409 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:49:37.419187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:49:37.419360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:49:37.421403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:49:37.421665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:49:37.429217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:37.429511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 19:49:37.439852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:49:37.442975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:49:37.448790 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:49:37.453293 augenrules[1360]: No rules Feb 13 19:49:37.451104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:49:37.452461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:49:37.454682 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:49:37.456664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:49:37.457499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:49:37.460611 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:49:37.463813 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:37.466760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:49:37.466981 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:49:37.470385 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:49:37.471185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:49:37.475460 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:49:37.475770 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:49:37.480057 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:49:37.489356 systemd[1]: Finished ensure-sysext.service. Feb 13 19:49:37.498635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:49:37.498883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:49:37.501971 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:49:37.519388 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:49:37.521601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:49:37.521724 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:49:37.532740 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:49:37.532928 systemd-resolved[1332]: Positive Trust Anchors: Feb 13 19:49:37.532938 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:49:37.532970 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:49:37.534238 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:49:37.534757 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:49:37.537680 systemd-resolved[1332]: Defaulting to hostname 'linux'. Feb 13 19:49:37.539394 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:49:37.540963 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:49:37.555539 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1375) Feb 13 19:49:37.602921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:49:37.610443 systemd-networkd[1400]: lo: Link UP Feb 13 19:49:37.610680 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:49:37.610849 systemd-networkd[1400]: lo: Gained carrier Feb 13 19:49:37.612586 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:49:37.612714 systemd-networkd[1400]: Enumeration completed Feb 13 19:49:37.613119 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:49:37.613123 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:49:37.663370 systemd-networkd[1400]: eth0: Link UP Feb 13 19:49:37.663478 systemd-networkd[1400]: eth0: Gained carrier Feb 13 19:49:37.663651 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:49:37.664558 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:49:37.664621 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:49:37.669911 systemd[1]: Reached target network.target - Network. Feb 13 19:49:37.672461 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:49:37.671107 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:49:37.678843 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:49:37.679008 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:49:37.680728 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Feb 13 19:49:37.680898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:49:38.295940 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
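[Editorial aside, not part of the log] The positive trust anchor reported by systemd-resolved above is the root zone's DS record (key tag 20326). As a small hedged aside, the Python below pulls that record apart; the field meanings (algorithm 8 = RSASHA256, digest type 2 = SHA-256) are standard DNSSEC values rather than anything stated in the log.

```python
# Decode the fields of the root trust anchor systemd-resolved reported above.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, _cls, rtype, key_tag, algorithm, digest_type, digest = ds.split()
assert rtype == "DS"
print(f"owner={owner!r} key_tag={key_tag} algorithm={algorithm} digest_type={digest_type}")

# digest type 2 means SHA-256, so the digest should be 32 bytes (64 hex characters)
print(len(bytes.fromhex(digest)))   # -> 32
```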
Feb 13 19:49:38.295992 systemd-timesyncd[1401]: Initial clock synchronization to Thu 2025-02-13 19:49:38.295833 UTC. Feb 13 19:49:38.295995 systemd-resolved[1332]: Clock change detected. Flushing caches. Feb 13 19:49:38.305179 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 19:49:38.305301 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:49:38.307227 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:49:38.307430 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:49:38.331790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:49:38.335949 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:49:38.428370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:49:38.438815 kernel: kvm_amd: TSC scaling supported Feb 13 19:49:38.438889 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:49:38.438915 kernel: kvm_amd: Nested Paging enabled Feb 13 19:49:38.439946 kernel: kvm_amd: LBR virtualization supported Feb 13 19:49:38.439979 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:49:38.440992 kernel: kvm_amd: Virtual GIF supported Feb 13 19:49:38.461549 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:49:38.499071 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:49:38.512906 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:49:38.521389 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:49:38.552782 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:49:38.554429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:49:38.555638 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:49:38.556902 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:49:38.558284 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:49:38.559877 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:49:38.561194 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:49:38.562599 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:49:38.563996 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:49:38.564032 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:49:38.565036 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:49:38.566796 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:49:38.569733 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:49:38.577017 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:49:38.579686 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:49:38.581378 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:49:38.582622 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:49:38.583626 systemd[1]: Reached target basic.target - Basic System. 
Feb 13 19:49:38.584115 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:49:38.584150 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:49:38.585327 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:49:38.587657 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:49:38.590448 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:49:38.590474 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:49:38.596701 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:49:38.598877 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:49:38.600603 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:49:38.605076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:49:38.605321 jq[1433]: false Feb 13 19:49:38.607780 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:49:38.615340 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:49:38.621751 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:49:38.621914 extend-filesystems[1434]: Found loop3 Feb 13 19:49:38.621991 dbus-daemon[1432]: [system] SELinux support is enabled Feb 13 19:49:38.623279 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:49:38.624629 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:49:38.625675 extend-filesystems[1434]: Found loop4 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found loop5 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found sr0 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda1 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda2 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda3 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found usr Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda4 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda6 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda7 Feb 13 19:49:38.625675 extend-filesystems[1434]: Found vda9 Feb 13 19:49:38.651968 extend-filesystems[1434]: Checking size of /dev/vda9 Feb 13 19:49:38.651968 extend-filesystems[1434]: Resized partition /dev/vda9 Feb 13 19:49:38.659463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1387) Feb 13 19:49:38.625760 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:49:38.631295 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:49:38.636382 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:49:38.659911 jq[1450]: true Feb 13 19:49:38.641934 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:49:38.655167 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 13 19:49:38.655465 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:49:38.655906 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:49:38.656142 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:49:38.661392 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:49:38.665372 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:49:38.666750 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:49:38.669575 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:49:38.692380 update_engine[1446]: I20250213 19:49:38.691754 1446 main.cc:92] Flatcar Update Engine starting Feb 13 19:49:38.695621 update_engine[1446]: I20250213 19:49:38.693306 1446 update_check_scheduler.cc:74] Next update check in 3m31s Feb 13 19:49:38.694022 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:49:38.696362 jq[1459]: true Feb 13 19:49:38.700935 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:49:38.716821 tar[1457]: linux-amd64/LICENSE Feb 13 19:49:38.725498 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:49:38.727168 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:49:38.727193 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:49:38.729098 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:49:38.729118 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:49:38.735914 tar[1457]: linux-amd64/helm Feb 13 19:49:38.737320 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:49:38.737320 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:49:38.737320 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:49:38.746453 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Feb 13 19:49:38.740852 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:49:38.746037 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:49:38.746615 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:49:38.751407 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:49:38.751437 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:49:38.753832 systemd-logind[1442]: New seat seat0. Feb 13 19:49:38.755533 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:49:38.770559 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:49:38.772179 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:49:38.776410 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
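[Editorial aside, not part of the log] The extend-filesystems run above grew the ext4 root on /dev/vda9 from 553472 to 1864699 blocks at a 4 KiB block size. As a quick sanity check on those figures:

```python
# Quick check of the resize reported above: ext4 on /dev/vda9 grew from
# 553472 to 1864699 blocks with a 4 KiB block size.
block_size = 4096
old_blocks, new_blocks = 553472, 1864699

old_gib = old_blocks * block_size / 2**30
new_gib = new_blocks * block_size / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")  # ~2.11 GiB -> ~7.11 GiB
```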
Feb 13 19:49:38.781092 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:49:38.912599 containerd[1460]: time="2025-02-13T19:49:38.912470784Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:49:38.937267 containerd[1460]: time="2025-02-13T19:49:38.937193589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939143 containerd[1460]: time="2025-02-13T19:49:38.939101006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939143 containerd[1460]: time="2025-02-13T19:49:38.939136893Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:49:38.939239 containerd[1460]: time="2025-02-13T19:49:38.939152562Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:49:38.939391 containerd[1460]: time="2025-02-13T19:49:38.939357887Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:49:38.939391 containerd[1460]: time="2025-02-13T19:49:38.939381612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939476 containerd[1460]: time="2025-02-13T19:49:38.939451012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939476 containerd[1460]: time="2025-02-13T19:49:38.939469647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939718 containerd[1460]: time="2025-02-13T19:49:38.939689349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939718 containerd[1460]: time="2025-02-13T19:49:38.939712632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939796 containerd[1460]: time="2025-02-13T19:49:38.939725637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939796 containerd[1460]: time="2025-02-13T19:49:38.939736758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:49:38.939857 containerd[1460]: time="2025-02-13T19:49:38.939832026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:49:38.940096 containerd[1460]: time="2025-02-13T19:49:38.940068400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:49:38.940219 containerd[1460]: time="2025-02-13T19:49:38.940191821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:49:38.940219 containerd[1460]: time="2025-02-13T19:49:38.940211798Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:49:38.940327 containerd[1460]: time="2025-02-13T19:49:38.940304542Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:49:38.940384 containerd[1460]: time="2025-02-13T19:49:38.940363974Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:49:38.947712 containerd[1460]: time="2025-02-13T19:49:38.947579453Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:49:38.947712 containerd[1460]: time="2025-02-13T19:49:38.947663731Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:49:38.947712 containerd[1460]: time="2025-02-13T19:49:38.947691122Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:49:38.947712 containerd[1460]: time="2025-02-13T19:49:38.947706461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:49:38.947879 containerd[1460]: time="2025-02-13T19:49:38.947724956Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:49:38.947950 containerd[1460]: time="2025-02-13T19:49:38.947927285Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:49:38.948225 containerd[1460]: time="2025-02-13T19:49:38.948201128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:49:38.948344 containerd[1460]: time="2025-02-13T19:49:38.948322165Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:49:38.948368 containerd[1460]: time="2025-02-13T19:49:38.948344257Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:49:38.948368 containerd[1460]: time="2025-02-13T19:49:38.948358073Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:49:38.948411 containerd[1460]: time="2025-02-13T19:49:38.948371658Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:49:38.948411 containerd[1460]: time="2025-02-13T19:49:38.948384693Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:49:38.948411 containerd[1460]: time="2025-02-13T19:49:38.948398248Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:49:38.948462 containerd[1460]: time="2025-02-13T19:49:38.948411874Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:49:38.948462 containerd[1460]: time="2025-02-13T19:49:38.948426411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 19:49:38.948462 containerd[1460]: time="2025-02-13T19:49:38.948438183Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:49:38.948462 containerd[1460]: time="2025-02-13T19:49:38.948450356Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:49:38.948462 containerd[1460]: time="2025-02-13T19:49:38.948462569Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:49:38.948575 containerd[1460]: time="2025-02-13T19:49:38.948492735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948575 containerd[1460]: time="2025-02-13T19:49:38.948506381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948575 containerd[1460]: time="2025-02-13T19:49:38.948532149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948575 containerd[1460]: time="2025-02-13T19:49:38.948544432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948575 containerd[1460]: time="2025-02-13T19:49:38.948562676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948578125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948591260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948603763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948617860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948632537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948643728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948654919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948670 containerd[1460]: time="2025-02-13T19:49:38.948666681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948806 containerd[1460]: time="2025-02-13T19:49:38.948681760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:49:38.948806 containerd[1460]: time="2025-02-13T19:49:38.948701126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948806 containerd[1460]: time="2025-02-13T19:49:38.948718539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 19:49:38.948806 containerd[1460]: time="2025-02-13T19:49:38.948729880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:49:38.948806 containerd[1460]: time="2025-02-13T19:49:38.948776738Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:49:38.948806 containerd[1460]: time="2025-02-13T19:49:38.948796465Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:49:38.948806 containerd[1460]: time="2025-02-13T19:49:38.948807515Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:49:38.948928 containerd[1460]: time="2025-02-13T19:49:38.948819358Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:49:38.948928 containerd[1460]: time="2025-02-13T19:49:38.948828775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.948928 containerd[1460]: time="2025-02-13T19:49:38.948841078Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:49:38.948928 containerd[1460]: time="2025-02-13T19:49:38.948850756Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:49:38.948928 containerd[1460]: time="2025-02-13T19:49:38.948877667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:49:38.949176 containerd[1460]: time="2025-02-13T19:49:38.949116645Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:49:38.949176 containerd[1460]: time="2025-02-13T19:49:38.949173051Z" level=info msg="Connect containerd service" Feb 13 19:49:38.949339 containerd[1460]: time="2025-02-13T19:49:38.949204189Z" level=info msg="using legacy CRI server" Feb 13 19:49:38.949339 containerd[1460]: time="2025-02-13T19:49:38.949212214Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:49:38.949339 containerd[1460]: time="2025-02-13T19:49:38.949300680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:49:38.950242 containerd[1460]: time="2025-02-13T19:49:38.950201780Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950600157Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950654859Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950716314Z" level=info msg="Start subscribing containerd event" Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950756329Z" level=info msg="Start recovering state" Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950820730Z" level=info msg="Start event monitor" Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950838483Z" level=info msg="Start snapshots syncer" Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950848462Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950860545Z" level=info msg="Start streaming server" Feb 13 19:49:38.951143 containerd[1460]: time="2025-02-13T19:49:38.950915558Z" level=info msg="containerd successfully booted in 0.039541s" Feb 13 19:49:38.951258 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:49:39.142981 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:49:39.156896 tar[1457]: linux-amd64/README.md Feb 13 19:49:39.170394 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:49:39.309807 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:49:39.337685 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:49:39.350839 systemd[1]: Starting issuegen.service - Generate /run/issue... 
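Note: the CRI plugin configuration dumped above (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8) corresponds roughly to the following containerd 1.7 config fragment. This is an illustrative sketch written to a scratch path, not the file actually present on the host:

    # illustrative only; the real settings live in /etc/containerd/config.toml or built-in defaults
    cat <<'EOF' >/tmp/cri-snippet.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    EOF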
Feb 13 19:49:39.353211 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:58466.service - OpenSSH per-connection server daemon (10.0.0.1:58466). Feb 13 19:49:39.359454 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:49:39.359917 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:49:39.363649 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:49:39.383596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:49:39.391022 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:49:39.393560 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:49:39.394834 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:49:39.407456 sshd[1517]: Accepted publickey for core from 10.0.0.1 port 58466 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:39.409635 sshd[1517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:39.417945 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:49:39.428930 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:49:39.432374 systemd-logind[1442]: New session 1 of user core. Feb 13 19:49:39.442835 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:49:39.447695 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:49:39.456106 (systemd)[1528]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:49:39.561895 systemd[1528]: Queued start job for default target default.target. Feb 13 19:49:39.571889 systemd[1528]: Created slice app.slice - User Application Slice. Feb 13 19:49:39.571915 systemd[1528]: Reached target paths.target - Paths. Feb 13 19:49:39.571928 systemd[1528]: Reached target timers.target - Timers. Feb 13 19:49:39.573803 systemd[1528]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:49:39.591381 systemd[1528]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:49:39.591575 systemd[1528]: Reached target sockets.target - Sockets. Feb 13 19:49:39.591599 systemd[1528]: Reached target basic.target - Basic System. Feb 13 19:49:39.591648 systemd[1528]: Reached target default.target - Main User Target. Feb 13 19:49:39.591692 systemd[1528]: Startup finished in 128ms. Feb 13 19:49:39.592207 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:49:39.595101 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:49:39.635724 systemd-networkd[1400]: eth0: Gained IPv6LL Feb 13 19:49:39.639056 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:49:39.641554 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:49:39.649867 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:49:39.653101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:39.656829 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:49:39.673820 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:55746.service - OpenSSH per-connection server daemon (10.0.0.1:55746). Feb 13 19:49:39.688483 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:49:39.688929 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Feb 13 19:49:39.691556 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:49:39.693838 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:49:39.714065 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 55746 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:39.715882 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:39.720300 systemd-logind[1442]: New session 2 of user core. Feb 13 19:49:39.731656 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:49:39.790354 sshd[1548]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:39.809450 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:55746.service: Deactivated successfully. Feb 13 19:49:39.811290 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:49:39.812641 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:49:39.814054 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:55754.service - OpenSSH per-connection server daemon (10.0.0.1:55754). Feb 13 19:49:39.832578 systemd-logind[1442]: Removed session 2. Feb 13 19:49:39.863196 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 55754 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:39.865204 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:39.870792 systemd-logind[1442]: New session 3 of user core. Feb 13 19:49:39.884773 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:49:39.940905 sshd[1563]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:39.944502 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:55754.service: Deactivated successfully. Feb 13 19:49:39.946085 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:49:39.946677 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:49:39.947414 systemd-logind[1442]: Removed session 3. Feb 13 19:49:40.423826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:40.425564 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:49:40.426871 systemd[1]: Startup finished in 797ms (kernel) + 5.816s (initrd) + 4.349s (userspace) = 10.963s. Feb 13 19:49:40.439753 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:40.858287 kubelet[1574]: E0213 19:49:40.858229 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:40.862542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:40.862744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:40.863094 systemd[1]: kubelet.service: Consumed 1.028s CPU time. Feb 13 19:49:49.952568 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:35722.service - OpenSSH per-connection server daemon (10.0.0.1:35722). 
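Note: the kubelet exit above (run.go:72, missing /var/lib/kubelet/config.yaml) is expected on a node that has not yet been joined to a cluster, since that file is normally written by kubeadm init or kubeadm join. Purely as an illustration of what the file contains, a minimal hand-written stand-in would look like this (the generated file is considerably longer):

    # hypothetical stand-in for the file kubeadm normally generates
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet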
Feb 13 19:49:49.987914 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 35722 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:49.989451 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:49.993440 systemd-logind[1442]: New session 4 of user core. Feb 13 19:49:50.003679 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:49:50.058820 sshd[1587]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:50.070897 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:35722.service: Deactivated successfully. Feb 13 19:49:50.073392 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:49:50.075336 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:49:50.088984 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:35732.service - OpenSSH per-connection server daemon (10.0.0.1:35732). Feb 13 19:49:50.090004 systemd-logind[1442]: Removed session 4. Feb 13 19:49:50.120633 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 35732 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:50.122212 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:50.125941 systemd-logind[1442]: New session 5 of user core. Feb 13 19:49:50.139640 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:49:50.189284 sshd[1594]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:50.198262 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:35732.service: Deactivated successfully. Feb 13 19:49:50.200090 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:49:50.201732 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:49:50.213811 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:35746.service - OpenSSH per-connection server daemon (10.0.0.1:35746). Feb 13 19:49:50.214996 systemd-logind[1442]: Removed session 5. Feb 13 19:49:50.245908 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 35746 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:50.247375 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:50.251002 systemd-logind[1442]: New session 6 of user core. Feb 13 19:49:50.257652 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:49:50.311349 sshd[1601]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:50.317972 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:35746.service: Deactivated successfully. Feb 13 19:49:50.319407 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:49:50.320937 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:49:50.329743 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). Feb 13 19:49:50.330586 systemd-logind[1442]: Removed session 6. Feb 13 19:49:50.360678 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:50.362007 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:50.366068 systemd-logind[1442]: New session 7 of user core. Feb 13 19:49:50.376640 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 19:49:50.433186 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:49:50.433537 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:50.448770 sudo[1611]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:50.450955 sshd[1608]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:50.461364 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:35750.service: Deactivated successfully. Feb 13 19:49:50.463221 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:49:50.464951 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:49:50.473835 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:35752.service - OpenSSH per-connection server daemon (10.0.0.1:35752). Feb 13 19:49:50.474825 systemd-logind[1442]: Removed session 7. Feb 13 19:49:50.505394 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 35752 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:50.507039 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:50.511119 systemd-logind[1442]: New session 8 of user core. Feb 13 19:49:50.524641 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:49:50.578188 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:49:50.578558 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:50.582106 sudo[1620]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:50.587922 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:49:50.588248 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:50.605739 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:50.607623 auditctl[1623]: No rules Feb 13 19:49:50.608900 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:49:50.609131 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:50.610839 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:50.643809 augenrules[1641]: No rules Feb 13 19:49:50.645565 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:50.646716 sudo[1619]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:50.648673 sshd[1616]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:50.658165 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:35752.service: Deactivated successfully. Feb 13 19:49:50.659849 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:49:50.661435 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:49:50.671772 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:35758.service - OpenSSH per-connection server daemon (10.0.0.1:35758). Feb 13 19:49:50.672638 systemd-logind[1442]: Removed session 8. Feb 13 19:49:50.703534 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 35758 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:49:50.704981 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:50.708830 systemd-logind[1442]: New session 9 of user core. Feb 13 19:49:50.719645 systemd[1]: Started session-9.scope - Session 9 of User core. 
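Note: in session 8 above the default SELinux/audit rule files are removed and audit-rules.service is restarted, after which both auditctl and augenrules report "No rules". For reference, files under /etc/audit/rules.d/ use standard auditctl syntax; a hypothetical drop-in (path and key name are illustrative only) would be:

    # hypothetical rule file, not one present on this host
    cat <<'EOF' >/etc/audit/rules.d/10-example.rules
    -w /etc/ssh/sshd_config -p wa -k sshd_config
    EOF
    # merge rules.d into /etc/audit/audit.rules and load it into the kernel
    augenrules --load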
Feb 13 19:49:50.773196 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:49:50.773560 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:51.020732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:49:51.027710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:51.063740 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:49:51.064205 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:49:51.189436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:51.194715 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:51.232703 kubelet[1686]: E0213 19:49:51.232658 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:51.240098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:51.240327 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:51.349237 dockerd[1675]: time="2025-02-13T19:49:51.349085954Z" level=info msg="Starting up" Feb 13 19:49:51.760316 dockerd[1675]: time="2025-02-13T19:49:51.760179404Z" level=info msg="Loading containers: start." Feb 13 19:49:51.865552 kernel: Initializing XFRM netlink socket Feb 13 19:49:51.942157 systemd-networkd[1400]: docker0: Link UP Feb 13 19:49:51.964317 dockerd[1675]: time="2025-02-13T19:49:51.964255777Z" level=info msg="Loading containers: done." Feb 13 19:49:51.979156 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4158120117-merged.mount: Deactivated successfully. Feb 13 19:49:51.981426 dockerd[1675]: time="2025-02-13T19:49:51.981357543Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:49:51.981540 dockerd[1675]: time="2025-02-13T19:49:51.981498377Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:49:51.981715 dockerd[1675]: time="2025-02-13T19:49:51.981681471Z" level=info msg="Daemon has completed initialization" Feb 13 19:49:52.023096 dockerd[1675]: time="2025-02-13T19:49:52.022997057Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:49:52.023310 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:49:52.570437 containerd[1460]: time="2025-02-13T19:49:52.570391642Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:49:53.173901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776489937.mount: Deactivated successfully. 
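Note: once dockerd logs "API listen on /run/docker.sock", the daemon can be exercised directly over that Unix socket. A quick smoke test, assuming curl and the docker CLI are available in the image, is:

    # query the Engine API over the socket the daemon just announced
    curl --silent --unix-socket /run/docker.sock http://localhost/version
    # or via the CLI, which talks to the same socket by default
    docker version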
Feb 13 19:49:54.016186 containerd[1460]: time="2025-02-13T19:49:54.016118439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:54.017089 containerd[1460]: time="2025-02-13T19:49:54.017020260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:49:54.018202 containerd[1460]: time="2025-02-13T19:49:54.018173142Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:54.020984 containerd[1460]: time="2025-02-13T19:49:54.020956391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:54.022014 containerd[1460]: time="2025-02-13T19:49:54.021965463Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 1.451528095s" Feb 13 19:49:54.022014 containerd[1460]: time="2025-02-13T19:49:54.022005357Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:49:54.022606 containerd[1460]: time="2025-02-13T19:49:54.022578462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:49:55.320033 containerd[1460]: time="2025-02-13T19:49:55.319959150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:55.320912 containerd[1460]: time="2025-02-13T19:49:55.320837197Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:49:55.322321 containerd[1460]: time="2025-02-13T19:49:55.322277157Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:55.325119 containerd[1460]: time="2025-02-13T19:49:55.325072057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:55.325997 containerd[1460]: time="2025-02-13T19:49:55.325946737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.303337257s" Feb 13 19:49:55.325997 containerd[1460]: time="2025-02-13T19:49:55.325985009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:49:55.326431 
containerd[1460]: time="2025-02-13T19:49:55.326409224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:49:56.568164 containerd[1460]: time="2025-02-13T19:49:56.568095536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:56.569045 containerd[1460]: time="2025-02-13T19:49:56.568997267Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:49:56.570473 containerd[1460]: time="2025-02-13T19:49:56.570420625Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:56.573535 containerd[1460]: time="2025-02-13T19:49:56.573491974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:56.574808 containerd[1460]: time="2025-02-13T19:49:56.574755593Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.248313568s" Feb 13 19:49:56.574808 containerd[1460]: time="2025-02-13T19:49:56.574802692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:49:56.575425 containerd[1460]: time="2025-02-13T19:49:56.575307067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:49:57.653898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292130687.mount: Deactivated successfully. 
Feb 13 19:49:58.323088 containerd[1460]: time="2025-02-13T19:49:58.322993659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:58.325154 containerd[1460]: time="2025-02-13T19:49:58.325076064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:49:58.326660 containerd[1460]: time="2025-02-13T19:49:58.326604810Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:58.329016 containerd[1460]: time="2025-02-13T19:49:58.328974784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:58.330024 containerd[1460]: time="2025-02-13T19:49:58.329961935Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.754617177s" Feb 13 19:49:58.330024 containerd[1460]: time="2025-02-13T19:49:58.330019062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:49:58.330615 containerd[1460]: time="2025-02-13T19:49:58.330579302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:49:58.861797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447019128.mount: Deactivated successfully. 
Feb 13 19:49:59.992094 containerd[1460]: time="2025-02-13T19:49:59.992016386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:59.992954 containerd[1460]: time="2025-02-13T19:49:59.992850350Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:49:59.994635 containerd[1460]: time="2025-02-13T19:49:59.994604870Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:59.999931 containerd[1460]: time="2025-02-13T19:49:59.999899107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:00.002611 containerd[1460]: time="2025-02-13T19:50:00.002568492Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.671954243s" Feb 13 19:50:00.002611 containerd[1460]: time="2025-02-13T19:50:00.002607074Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:50:00.003320 containerd[1460]: time="2025-02-13T19:50:00.003277130Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:50:00.540835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551795652.mount: Deactivated successfully. 
Feb 13 19:50:00.547337 containerd[1460]: time="2025-02-13T19:50:00.547263154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:00.548128 containerd[1460]: time="2025-02-13T19:50:00.548066781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:50:00.549405 containerd[1460]: time="2025-02-13T19:50:00.549368482Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:00.551598 containerd[1460]: time="2025-02-13T19:50:00.551563116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:00.552292 containerd[1460]: time="2025-02-13T19:50:00.552267036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.961873ms" Feb 13 19:50:00.552337 containerd[1460]: time="2025-02-13T19:50:00.552295289Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:50:00.552858 containerd[1460]: time="2025-02-13T19:50:00.552826535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:50:01.270912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:50:01.285761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:01.462735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:01.468892 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:50:02.042717 kubelet[1971]: E0213 19:50:02.042595 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:50:02.046737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:50:02.046966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:50:02.523914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332123894.mount: Deactivated successfully. 
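Note: the PullImage/ImageCreate entries above come from containerd's CRI plugin rather than from docker. The same pulls can be reproduced or inspected from the host with crictl, assuming it is installed and pointed at the containerd socket shown earlier in the log:

    # point crictl at containerd's CRI socket
    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    crictl pull registry.k8s.io/pause:3.10
    crictl images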
Feb 13 19:50:05.441813 containerd[1460]: time="2025-02-13T19:50:05.441738584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:05.443043 containerd[1460]: time="2025-02-13T19:50:05.442963661Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:50:05.445894 containerd[1460]: time="2025-02-13T19:50:05.445843210Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:05.449098 containerd[1460]: time="2025-02-13T19:50:05.449057958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:05.450077 containerd[1460]: time="2025-02-13T19:50:05.450034729Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.897179541s" Feb 13 19:50:05.450077 containerd[1460]: time="2025-02-13T19:50:05.450066639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:50:07.254351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:07.265730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:07.287442 systemd[1]: Reloading requested from client PID 2063 ('systemctl') (unit session-9.scope)... Feb 13 19:50:07.287457 systemd[1]: Reloading... Feb 13 19:50:07.360594 zram_generator::config[2102]: No configuration found. Feb 13 19:50:07.507789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:50:07.586925 systemd[1]: Reloading finished in 298 ms. Feb 13 19:50:07.660650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:50:07.660753 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:50:07.661080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:07.664274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:07.828117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:07.834318 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:50:07.882755 kubelet[2151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:50:07.882755 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
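Note: during the reload above systemd warns that docker.socket line 6 still lists a path under the legacy /var/run/ directory. If one wanted to silence that warning without editing the vendor unit, the usual approach is a drop-in override; a sketch (the drop-in file name is illustrative):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # an empty assignment clears the inherited ListenStream before re-adding it
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload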
Feb 13 19:50:07.882755 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:50:07.883306 kubelet[2151]: I0213 19:50:07.882819 2151 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:50:08.297022 kubelet[2151]: I0213 19:50:08.296960 2151 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:50:08.297022 kubelet[2151]: I0213 19:50:08.297000 2151 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:50:08.297358 kubelet[2151]: I0213 19:50:08.297330 2151 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:50:08.331076 kubelet[2151]: E0213 19:50:08.331020 2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:08.331795 kubelet[2151]: I0213 19:50:08.331765 2151 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:50:08.340637 kubelet[2151]: E0213 19:50:08.340585 2151 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:50:08.340637 kubelet[2151]: I0213 19:50:08.340628 2151 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:50:08.345936 kubelet[2151]: I0213 19:50:08.345902 2151 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:50:08.347532 kubelet[2151]: I0213 19:50:08.347467 2151 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:50:08.347740 kubelet[2151]: I0213 19:50:08.347510 2151 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:50:08.347869 kubelet[2151]: I0213 19:50:08.347739 2151 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:50:08.347869 kubelet[2151]: I0213 19:50:08.347754 2151 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:50:08.347970 kubelet[2151]: I0213 19:50:08.347943 2151 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:50:08.351716 kubelet[2151]: I0213 19:50:08.351682 2151 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:50:08.351716 kubelet[2151]: I0213 19:50:08.351713 2151 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:50:08.351818 kubelet[2151]: I0213 19:50:08.351744 2151 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:50:08.351818 kubelet[2151]: I0213 19:50:08.351758 2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:50:08.358151 kubelet[2151]: I0213 19:50:08.357398 2151 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:50:08.358151 kubelet[2151]: W0213 19:50:08.357940 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:08.358151 kubelet[2151]: I0213 19:50:08.357995 2151 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:50:08.358151 kubelet[2151]: E0213 19:50:08.358003 2151 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:08.359662 kubelet[2151]: W0213 19:50:08.359259 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:08.359662 kubelet[2151]: E0213 19:50:08.359297 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:08.360217 kubelet[2151]: W0213 19:50:08.360199 2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:50:08.363564 kubelet[2151]: I0213 19:50:08.363534 2151 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:50:08.363635 kubelet[2151]: I0213 19:50:08.363572 2151 server.go:1287] "Started kubelet" Feb 13 19:50:08.365653 kubelet[2151]: I0213 19:50:08.364012 2151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:50:08.365653 kubelet[2151]: I0213 19:50:08.364468 2151 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:50:08.365653 kubelet[2151]: I0213 19:50:08.364552 2151 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:50:08.365653 kubelet[2151]: I0213 19:50:08.364967 2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:50:08.365653 kubelet[2151]: I0213 19:50:08.365568 2151 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:50:08.366745 kubelet[2151]: I0213 19:50:08.366580 2151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:50:08.368582 kubelet[2151]: E0213 19:50:08.368563 2151 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:50:08.368887 kubelet[2151]: I0213 19:50:08.368867 2151 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:50:08.369050 kubelet[2151]: I0213 19:50:08.369030 2151 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:50:08.369050 kubelet[2151]: E0213 19:50:08.368867 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:50:08.369117 kubelet[2151]: I0213 19:50:08.369107 2151 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:50:08.369318 kubelet[2151]: E0213 19:50:08.369207 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Feb 13 19:50:08.369542 kubelet[2151]: W0213 19:50:08.369417 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:08.369542 kubelet[2151]: E0213 19:50:08.369465 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:08.369542 kubelet[2151]: I0213 19:50:08.369533 2151 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:50:08.369642 kubelet[2151]: I0213 19:50:08.369623 2151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:50:08.369813 kubelet[2151]: E0213 19:50:08.368374 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dc6536c0ba3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:50:08.363551292 +0000 UTC m=+0.525008662,LastTimestamp:2025-02-13 19:50:08.363551292 +0000 UTC m=+0.525008662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:50:08.370647 kubelet[2151]: I0213 19:50:08.370630 2151 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:50:08.386762 kubelet[2151]: I0213 19:50:08.386558 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:50:08.387883 kubelet[2151]: I0213 19:50:08.387846 2151 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:50:08.387883 kubelet[2151]: I0213 19:50:08.387873 2151 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:50:08.387964 kubelet[2151]: I0213 19:50:08.387899 2151 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:50:08.387964 kubelet[2151]: I0213 19:50:08.387910 2151 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:50:08.388038 kubelet[2151]: E0213 19:50:08.387963 2151 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:50:08.390503 kubelet[2151]: I0213 19:50:08.390440 2151 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:50:08.390503 kubelet[2151]: I0213 19:50:08.390458 2151 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:50:08.390503 kubelet[2151]: I0213 19:50:08.390476 2151 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:50:08.391138 kubelet[2151]: W0213 19:50:08.391086 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:08.391190 kubelet[2151]: E0213 19:50:08.391148 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:08.469308 kubelet[2151]: E0213 19:50:08.469255 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:50:08.488421 kubelet[2151]: E0213 19:50:08.488350 2151 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:50:08.569909 kubelet[2151]: E0213 19:50:08.569705 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:50:08.570371 kubelet[2151]: E0213 19:50:08.570324 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Feb 13 19:50:08.638209 kubelet[2151]: I0213 19:50:08.638115 2151 policy_none.go:49] "None policy: Start" Feb 13 19:50:08.638209 kubelet[2151]: I0213 19:50:08.638160 2151 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:50:08.638209 kubelet[2151]: I0213 19:50:08.638182 2151 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:50:08.647090 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:50:08.663114 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:50:08.666668 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:50:08.672201 kubelet[2151]: E0213 19:50:08.670062 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:50:08.673675 kubelet[2151]: I0213 19:50:08.673644 2151 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:50:08.673901 kubelet[2151]: I0213 19:50:08.673880 2151 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:50:08.674347 kubelet[2151]: I0213 19:50:08.673895 2151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:50:08.674347 kubelet[2151]: I0213 19:50:08.674110 2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:50:08.675091 kubelet[2151]: E0213 19:50:08.675069 2151 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:50:08.675133 kubelet[2151]: E0213 19:50:08.675107 2151 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:50:08.697450 systemd[1]: Created slice kubepods-burstable-podf76944d20adca35b7b3b850e4c76d363.slice - libcontainer container kubepods-burstable-podf76944d20adca35b7b3b850e4c76d363.slice. Feb 13 19:50:08.708927 kubelet[2151]: E0213 19:50:08.708878 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:08.711342 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:50:08.719023 kubelet[2151]: E0213 19:50:08.718972 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:08.721774 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
Feb 13 19:50:08.723472 kubelet[2151]: E0213 19:50:08.723439 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:08.770842 kubelet[2151]: I0213 19:50:08.770788 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:08.770842 kubelet[2151]: I0213 19:50:08.770837 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:08.771005 kubelet[2151]: I0213 19:50:08.770858 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f76944d20adca35b7b3b850e4c76d363-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f76944d20adca35b7b3b850e4c76d363\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:08.771005 kubelet[2151]: I0213 19:50:08.770872 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:08.771005 kubelet[2151]: I0213 19:50:08.770890 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:08.771005 kubelet[2151]: I0213 19:50:08.770903 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:08.771005 kubelet[2151]: I0213 19:50:08.770922 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:08.771119 kubelet[2151]: I0213 19:50:08.770936 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f76944d20adca35b7b3b850e4c76d363-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f76944d20adca35b7b3b850e4c76d363\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:08.771119 kubelet[2151]: I0213 19:50:08.770951 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f76944d20adca35b7b3b850e4c76d363-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f76944d20adca35b7b3b850e4c76d363\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:08.775998 kubelet[2151]: I0213 19:50:08.775979 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:50:08.776466 kubelet[2151]: E0213 19:50:08.776414 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Feb 13 19:50:08.971694 kubelet[2151]: E0213 19:50:08.971501 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Feb 13 19:50:08.977821 kubelet[2151]: I0213 19:50:08.977780 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:50:08.978234 kubelet[2151]: E0213 19:50:08.978203 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Feb 13 19:50:09.009543 kubelet[2151]: E0213 19:50:09.009477 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:09.010277 containerd[1460]: time="2025-02-13T19:50:09.010235545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f76944d20adca35b7b3b850e4c76d363,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:09.019508 kubelet[2151]: E0213 19:50:09.019473 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:09.020026 containerd[1460]: time="2025-02-13T19:50:09.019981509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:09.024263 kubelet[2151]: E0213 19:50:09.024239 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:09.024654 containerd[1460]: time="2025-02-13T19:50:09.024617351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:09.380169 kubelet[2151]: I0213 19:50:09.380121 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:50:09.380638 kubelet[2151]: E0213 19:50:09.380586 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Feb 13 19:50:09.495848 kubelet[2151]: W0213 19:50:09.495764 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:09.495848 kubelet[2151]: E0213 
19:50:09.495846 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:09.707554 kubelet[2151]: W0213 19:50:09.707352 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:09.707554 kubelet[2151]: E0213 19:50:09.707440 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:09.738953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119391515.mount: Deactivated successfully. Feb 13 19:50:09.745766 containerd[1460]: time="2025-02-13T19:50:09.745689822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:09.746830 containerd[1460]: time="2025-02-13T19:50:09.746775718Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:09.747938 containerd[1460]: time="2025-02-13T19:50:09.747854661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:50:09.748967 containerd[1460]: time="2025-02-13T19:50:09.748909649Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:09.749944 containerd[1460]: time="2025-02-13T19:50:09.749906057Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:50:09.750730 containerd[1460]: time="2025-02-13T19:50:09.750690107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:50:09.751772 containerd[1460]: time="2025-02-13T19:50:09.751730518Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:09.755502 containerd[1460]: time="2025-02-13T19:50:09.755463788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:09.756380 containerd[1460]: time="2025-02-13T19:50:09.756344168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.287098ms" 
Feb 13 19:50:09.757847 kubelet[2151]: W0213 19:50:09.757760 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:09.758363 kubelet[2151]: E0213 19:50:09.757999 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:09.758437 containerd[1460]: time="2025-02-13T19:50:09.758065646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 747.747316ms" Feb 13 19:50:09.759418 containerd[1460]: time="2025-02-13T19:50:09.759373659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.680275ms" Feb 13 19:50:09.773097 kubelet[2151]: E0213 19:50:09.773038 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Feb 13 19:50:09.933386 containerd[1460]: time="2025-02-13T19:50:09.933211895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:09.933386 containerd[1460]: time="2025-02-13T19:50:09.933286415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:09.933386 containerd[1460]: time="2025-02-13T19:50:09.933307745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:09.933744 containerd[1460]: time="2025-02-13T19:50:09.933154368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:09.933744 containerd[1460]: time="2025-02-13T19:50:09.933229919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:09.933744 containerd[1460]: time="2025-02-13T19:50:09.933250237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:09.933744 containerd[1460]: time="2025-02-13T19:50:09.933434613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:09.934842 containerd[1460]: time="2025-02-13T19:50:09.934175492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:09.935660 containerd[1460]: time="2025-02-13T19:50:09.933415136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:09.935660 containerd[1460]: time="2025-02-13T19:50:09.934976284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:09.935660 containerd[1460]: time="2025-02-13T19:50:09.934991642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:09.935660 containerd[1460]: time="2025-02-13T19:50:09.935076662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:09.965211 kubelet[2151]: W0213 19:50:09.965048 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Feb 13 19:50:09.965211 kubelet[2151]: E0213 19:50:09.965136 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:50:09.968789 systemd[1]: Started cri-containerd-200d7b3b431f1ef2290b7dcca638f835e74eb720ff3c086817e6531c3b835f7a.scope - libcontainer container 200d7b3b431f1ef2290b7dcca638f835e74eb720ff3c086817e6531c3b835f7a. Feb 13 19:50:09.970778 systemd[1]: Started cri-containerd-23c463057a770d489148b1cee702252cfee71b246c2443db176a576c01297523.scope - libcontainer container 23c463057a770d489148b1cee702252cfee71b246c2443db176a576c01297523. Feb 13 19:50:09.975777 systemd[1]: Started cri-containerd-8a49b863d53827a0fc7592b2756258aecf8ec71149bd9e6c90fb139a0960a185.scope - libcontainer container 8a49b863d53827a0fc7592b2756258aecf8ec71149bd9e6c90fb139a0960a185. 
Feb 13 19:50:10.016161 containerd[1460]: time="2025-02-13T19:50:10.016106232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"200d7b3b431f1ef2290b7dcca638f835e74eb720ff3c086817e6531c3b835f7a\"" Feb 13 19:50:10.019062 kubelet[2151]: E0213 19:50:10.018991 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:10.022774 containerd[1460]: time="2025-02-13T19:50:10.022735162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f76944d20adca35b7b3b850e4c76d363,Namespace:kube-system,Attempt:0,} returns sandbox id \"23c463057a770d489148b1cee702252cfee71b246c2443db176a576c01297523\"" Feb 13 19:50:10.022836 containerd[1460]: time="2025-02-13T19:50:10.022766570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a49b863d53827a0fc7592b2756258aecf8ec71149bd9e6c90fb139a0960a185\"" Feb 13 19:50:10.023805 kubelet[2151]: E0213 19:50:10.023779 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:10.023893 kubelet[2151]: E0213 19:50:10.023764 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:10.025419 containerd[1460]: time="2025-02-13T19:50:10.025388817Z" level=info msg="CreateContainer within sandbox \"23c463057a770d489148b1cee702252cfee71b246c2443db176a576c01297523\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:50:10.025458 containerd[1460]: time="2025-02-13T19:50:10.025441215Z" level=info msg="CreateContainer within sandbox \"8a49b863d53827a0fc7592b2756258aecf8ec71149bd9e6c90fb139a0960a185\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:50:10.026544 containerd[1460]: time="2025-02-13T19:50:10.026510631Z" level=info msg="CreateContainer within sandbox \"200d7b3b431f1ef2290b7dcca638f835e74eb720ff3c086817e6531c3b835f7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:50:10.051547 containerd[1460]: time="2025-02-13T19:50:10.051466974Z" level=info msg="CreateContainer within sandbox \"23c463057a770d489148b1cee702252cfee71b246c2443db176a576c01297523\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3bb6d63d25cb9e413d3c313a496d109e7dc09bc66321f5024cf61868f6a90519\"" Feb 13 19:50:10.052445 containerd[1460]: time="2025-02-13T19:50:10.052384885Z" level=info msg="StartContainer for \"3bb6d63d25cb9e413d3c313a496d109e7dc09bc66321f5024cf61868f6a90519\"" Feb 13 19:50:10.058534 containerd[1460]: time="2025-02-13T19:50:10.058462781Z" level=info msg="CreateContainer within sandbox \"8a49b863d53827a0fc7592b2756258aecf8ec71149bd9e6c90fb139a0960a185\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fac76a73ceb18b7e462651945ec38c591f74fc63546f2b683b2a7b67cecb5f14\"" Feb 13 19:50:10.059198 containerd[1460]: time="2025-02-13T19:50:10.059149889Z" level=info msg="StartContainer for \"fac76a73ceb18b7e462651945ec38c591f74fc63546f2b683b2a7b67cecb5f14\"" Feb 13 19:50:10.060935 
containerd[1460]: time="2025-02-13T19:50:10.060901964Z" level=info msg="CreateContainer within sandbox \"200d7b3b431f1ef2290b7dcca638f835e74eb720ff3c086817e6531c3b835f7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef4626a19e467f85fa154362aef6499c36fab3aabba66e38f3e03fedfc31f73b\"" Feb 13 19:50:10.061199 containerd[1460]: time="2025-02-13T19:50:10.061179384Z" level=info msg="StartContainer for \"ef4626a19e467f85fa154362aef6499c36fab3aabba66e38f3e03fedfc31f73b\"" Feb 13 19:50:10.084680 systemd[1]: Started cri-containerd-3bb6d63d25cb9e413d3c313a496d109e7dc09bc66321f5024cf61868f6a90519.scope - libcontainer container 3bb6d63d25cb9e413d3c313a496d109e7dc09bc66321f5024cf61868f6a90519. Feb 13 19:50:10.088429 systemd[1]: Started cri-containerd-fac76a73ceb18b7e462651945ec38c591f74fc63546f2b683b2a7b67cecb5f14.scope - libcontainer container fac76a73ceb18b7e462651945ec38c591f74fc63546f2b683b2a7b67cecb5f14. Feb 13 19:50:10.093206 systemd[1]: Started cri-containerd-ef4626a19e467f85fa154362aef6499c36fab3aabba66e38f3e03fedfc31f73b.scope - libcontainer container ef4626a19e467f85fa154362aef6499c36fab3aabba66e38f3e03fedfc31f73b. Feb 13 19:50:10.145604 containerd[1460]: time="2025-02-13T19:50:10.144988997Z" level=info msg="StartContainer for \"ef4626a19e467f85fa154362aef6499c36fab3aabba66e38f3e03fedfc31f73b\" returns successfully" Feb 13 19:50:10.145604 containerd[1460]: time="2025-02-13T19:50:10.145109163Z" level=info msg="StartContainer for \"3bb6d63d25cb9e413d3c313a496d109e7dc09bc66321f5024cf61868f6a90519\" returns successfully" Feb 13 19:50:10.182159 kubelet[2151]: I0213 19:50:10.182091 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:50:10.182576 kubelet[2151]: E0213 19:50:10.182505 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Feb 13 19:50:10.183222 containerd[1460]: time="2025-02-13T19:50:10.183068857Z" level=info msg="StartContainer for \"fac76a73ceb18b7e462651945ec38c591f74fc63546f2b683b2a7b67cecb5f14\" returns successfully" Feb 13 19:50:10.399635 kubelet[2151]: E0213 19:50:10.399381 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:10.399635 kubelet[2151]: E0213 19:50:10.399549 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:10.399918 kubelet[2151]: E0213 19:50:10.399889 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:10.400025 kubelet[2151]: E0213 19:50:10.400004 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:10.400166 kubelet[2151]: E0213 19:50:10.400151 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:10.400337 kubelet[2151]: E0213 19:50:10.400303 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
19:50:11.403124 kubelet[2151]: E0213 19:50:11.402893 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:11.403124 kubelet[2151]: E0213 19:50:11.403037 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:11.404057 kubelet[2151]: E0213 19:50:11.403900 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:50:11.404057 kubelet[2151]: E0213 19:50:11.404008 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:11.784785 kubelet[2151]: I0213 19:50:11.784696 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:50:11.840068 kubelet[2151]: E0213 19:50:11.840002 2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:50:11.945393 kubelet[2151]: I0213 19:50:11.945310 2151 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:50:11.945393 kubelet[2151]: E0213 19:50:11.945354 2151 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:50:11.955868 kubelet[2151]: E0213 19:50:11.955837 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:50:11.978959 kubelet[2151]: E0213 19:50:11.978858 2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dc6536c0ba3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:50:08.363551292 +0000 UTC m=+0.525008662,LastTimestamp:2025-02-13 19:50:08.363551292 +0000 UTC m=+0.525008662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:50:12.031407 kubelet[2151]: E0213 19:50:12.031286 2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dc65370d083f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:50:08.368551999 +0000 UTC m=+0.530009389,LastTimestamp:2025-02-13 19:50:08.368551999 +0000 UTC m=+0.530009389,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:50:12.056588 kubelet[2151]: E0213 19:50:12.056447 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:50:12.085152 
kubelet[2151]: E0213 19:50:12.085045 2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dc6538506aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:50:08.389745316 +0000 UTC m=+0.551202686,LastTimestamp:2025-02-13 19:50:08.389745316 +0000 UTC m=+0.551202686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:50:12.169829 kubelet[2151]: I0213 19:50:12.169773 2151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:12.174430 kubelet[2151]: E0213 19:50:12.174378 2151 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:12.174430 kubelet[2151]: I0213 19:50:12.174412 2151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:12.175976 kubelet[2151]: E0213 19:50:12.175946 2151 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:12.175976 kubelet[2151]: I0213 19:50:12.175975 2151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:12.177539 kubelet[2151]: E0213 19:50:12.177489 2151 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:12.361250 kubelet[2151]: I0213 19:50:12.360828 2151 apiserver.go:52] "Watching apiserver" Feb 13 19:50:12.369484 kubelet[2151]: I0213 19:50:12.369435 2151 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:50:12.402738 kubelet[2151]: I0213 19:50:12.402698 2151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:12.404637 kubelet[2151]: E0213 19:50:12.404605 2151 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:12.405013 kubelet[2151]: E0213 19:50:12.404808 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:13.778587 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-9.scope)... Feb 13 19:50:13.778607 systemd[1]: Reloading... Feb 13 19:50:13.948594 zram_generator::config[2470]: No configuration found. Feb 13 19:50:14.141403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 19:50:14.240249 systemd[1]: Reloading finished in 461 ms. Feb 13 19:50:14.285964 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:14.310570 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:50:14.310937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:14.311013 systemd[1]: kubelet.service: Consumed 1.168s CPU time, 128.0M memory peak, 0B memory swap peak. Feb 13 19:50:14.319207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:14.498585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:14.509976 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:50:14.569847 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:50:14.569847 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:50:14.569847 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:50:14.570267 kubelet[2512]: I0213 19:50:14.569899 2512 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:50:14.576101 kubelet[2512]: I0213 19:50:14.576069 2512 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:50:14.576101 kubelet[2512]: I0213 19:50:14.576090 2512 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:50:14.576350 kubelet[2512]: I0213 19:50:14.576326 2512 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:50:14.577417 kubelet[2512]: I0213 19:50:14.577393 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:50:14.580756 kubelet[2512]: I0213 19:50:14.580723 2512 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:50:14.585551 kubelet[2512]: E0213 19:50:14.585498 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:50:14.585551 kubelet[2512]: I0213 19:50:14.585549 2512 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:50:14.590002 kubelet[2512]: I0213 19:50:14.589981 2512 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:50:14.590241 kubelet[2512]: I0213 19:50:14.590206 2512 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:50:14.590384 kubelet[2512]: I0213 19:50:14.590234 2512 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:50:14.590459 kubelet[2512]: I0213 19:50:14.590384 2512 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:50:14.590459 kubelet[2512]: I0213 19:50:14.590393 2512 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:50:14.590459 kubelet[2512]: I0213 19:50:14.590431 2512 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:50:14.590625 kubelet[2512]: I0213 19:50:14.590603 2512 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:50:14.590676 kubelet[2512]: I0213 19:50:14.590658 2512 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:50:14.590708 kubelet[2512]: I0213 19:50:14.590678 2512 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:50:14.590708 kubelet[2512]: I0213 19:50:14.590689 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:50:14.591379 kubelet[2512]: I0213 19:50:14.591354 2512 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:50:14.591809 kubelet[2512]: I0213 19:50:14.591783 2512 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:50:14.592407 kubelet[2512]: I0213 19:50:14.592386 2512 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:50:14.592450 kubelet[2512]: I0213 19:50:14.592423 2512 server.go:1287] "Started kubelet" Feb 13 19:50:14.592543 kubelet[2512]: I0213 19:50:14.592512 2512 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:50:14.592751 kubelet[2512]: I0213 19:50:14.592703 2512 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:50:14.593415 kubelet[2512]: I0213 19:50:14.593390 2512 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:50:14.596116 kubelet[2512]: I0213 19:50:14.596067 2512 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:50:14.598281 kubelet[2512]: E0213 19:50:14.597719 2512 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:50:14.600677 kubelet[2512]: I0213 19:50:14.600661 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:50:14.601256 kubelet[2512]: I0213 19:50:14.601230 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:50:14.602275 kubelet[2512]: I0213 19:50:14.602164 2512 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:50:14.602349 kubelet[2512]: E0213 19:50:14.602332 2512 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:50:14.602770 kubelet[2512]: I0213 19:50:14.602752 2512 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:50:14.602914 kubelet[2512]: I0213 19:50:14.602900 2512 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:50:14.604306 kubelet[2512]: I0213 19:50:14.604282 2512 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:50:14.604416 kubelet[2512]: I0213 19:50:14.604393 2512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:50:14.609371 kubelet[2512]: I0213 19:50:14.609345 2512 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:50:14.616600 kubelet[2512]: I0213 19:50:14.616560 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:50:14.617728 kubelet[2512]: I0213 19:50:14.617705 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:50:14.617728 kubelet[2512]: I0213 19:50:14.617724 2512 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:50:14.617806 kubelet[2512]: I0213 19:50:14.617743 2512 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:50:14.617806 kubelet[2512]: I0213 19:50:14.617750 2512 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:50:14.617806 kubelet[2512]: E0213 19:50:14.617794 2512 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:50:14.641321 kubelet[2512]: I0213 19:50:14.641296 2512 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:50:14.641321 kubelet[2512]: I0213 19:50:14.641312 2512 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:50:14.641321 kubelet[2512]: I0213 19:50:14.641330 2512 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:50:14.641583 kubelet[2512]: I0213 19:50:14.641472 2512 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:50:14.641583 kubelet[2512]: I0213 19:50:14.641483 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:50:14.641583 kubelet[2512]: I0213 19:50:14.641501 2512 policy_none.go:49] "None policy: Start" Feb 13 19:50:14.641583 kubelet[2512]: I0213 19:50:14.641509 2512 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:50:14.641583 kubelet[2512]: I0213 19:50:14.641541 2512 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:50:14.641695 kubelet[2512]: I0213 19:50:14.641634 2512 state_mem.go:75] "Updated machine memory state" Feb 13 19:50:14.645496 kubelet[2512]: I0213 19:50:14.645475 2512 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:50:14.645851 kubelet[2512]: I0213 19:50:14.645648 2512 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:50:14.645851 kubelet[2512]: I0213 19:50:14.645666 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:50:14.645921 kubelet[2512]: I0213 19:50:14.645900 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:50:14.646380 kubelet[2512]: E0213 19:50:14.646353 2512 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:50:14.719142 kubelet[2512]: I0213 19:50:14.719036 2512 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:14.719142 kubelet[2512]: I0213 19:50:14.719040 2512 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:14.719339 kubelet[2512]: I0213 19:50:14.719272 2512 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:14.751668 kubelet[2512]: I0213 19:50:14.751541 2512 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:50:14.758780 kubelet[2512]: I0213 19:50:14.758731 2512 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 19:50:14.758921 kubelet[2512]: I0213 19:50:14.758817 2512 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:50:14.803504 kubelet[2512]: I0213 19:50:14.803436 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f76944d20adca35b7b3b850e4c76d363-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f76944d20adca35b7b3b850e4c76d363\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:14.904653 kubelet[2512]: I0213 19:50:14.904611 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:14.904653 kubelet[2512]: I0213 19:50:14.904657 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f76944d20adca35b7b3b850e4c76d363-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f76944d20adca35b7b3b850e4c76d363\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:14.904850 kubelet[2512]: I0213 19:50:14.904675 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f76944d20adca35b7b3b850e4c76d363-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f76944d20adca35b7b3b850e4c76d363\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:14.904850 kubelet[2512]: I0213 19:50:14.904695 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:14.904850 kubelet[2512]: I0213 19:50:14.904714 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:14.904850 kubelet[2512]: I0213 19:50:14.904728 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:14.904850 kubelet[2512]: I0213 19:50:14.904811 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:14.904962 kubelet[2512]: I0213 19:50:14.904843 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:15.025285 kubelet[2512]: E0213 19:50:15.025230 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:15.027346 kubelet[2512]: E0213 19:50:15.027309 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:15.027393 kubelet[2512]: E0213 19:50:15.027309 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:15.591903 kubelet[2512]: I0213 19:50:15.591867 2512 apiserver.go:52] "Watching apiserver" Feb 13 19:50:15.603377 kubelet[2512]: I0213 19:50:15.603335 2512 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:50:15.630233 kubelet[2512]: I0213 19:50:15.629971 2512 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:15.630233 kubelet[2512]: I0213 19:50:15.629993 2512 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:15.630233 kubelet[2512]: I0213 19:50:15.630093 2512 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:15.664128 kubelet[2512]: E0213 19:50:15.662189 2512 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:50:15.664128 kubelet[2512]: E0213 19:50:15.662381 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:15.664128 kubelet[2512]: E0213 19:50:15.663999 2512 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:50:15.664621 kubelet[2512]: E0213 19:50:15.664377 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:15.664975 kubelet[2512]: E0213 19:50:15.664939 2512 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" 
already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:50:15.665701 kubelet[2512]: E0213 19:50:15.665268 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:15.728513 kubelet[2512]: I0213 19:50:15.728204 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7281782209999998 podStartE2EDuration="1.728178221s" podCreationTimestamp="2025-02-13 19:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:15.727452888 +0000 UTC m=+1.209057350" watchObservedRunningTime="2025-02-13 19:50:15.728178221 +0000 UTC m=+1.209782683" Feb 13 19:50:15.728513 kubelet[2512]: I0213 19:50:15.728371 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.728362918 podStartE2EDuration="1.728362918s" podCreationTimestamp="2025-02-13 19:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:15.662867409 +0000 UTC m=+1.144471871" watchObservedRunningTime="2025-02-13 19:50:15.728362918 +0000 UTC m=+1.209967380" Feb 13 19:50:15.775236 kubelet[2512]: I0213 19:50:15.775169 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.775148967 podStartE2EDuration="1.775148967s" podCreationTimestamp="2025-02-13 19:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:15.774504008 +0000 UTC m=+1.256108470" watchObservedRunningTime="2025-02-13 19:50:15.775148967 +0000 UTC m=+1.256753429" Feb 13 19:50:16.632326 kubelet[2512]: E0213 19:50:16.631588 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:16.632326 kubelet[2512]: E0213 19:50:16.631588 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:16.632326 kubelet[2512]: E0213 19:50:16.631917 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:17.632888 kubelet[2512]: E0213 19:50:17.632851 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:18.350860 kubelet[2512]: E0213 19:50:18.350817 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:18.669872 kubelet[2512]: I0213 19:50:18.669732 2512 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:50:18.670607 containerd[1460]: time="2025-02-13T19:50:18.670558857Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:50:18.670955 kubelet[2512]: I0213 19:50:18.670842 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:50:19.620297 kubelet[2512]: I0213 19:50:19.620236 2512 status_manager.go:890] "Failed to get status for pod" podUID="4417f6a3-01e0-49fb-9d3f-bcb689073cee" pod="kube-system/kube-proxy-9tv6m" err="pods \"kube-proxy-9tv6m\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Feb 13 19:50:19.620593 kubelet[2512]: W0213 19:50:19.620397 2512 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 13 19:50:19.620593 kubelet[2512]: E0213 19:50:19.620434 2512 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 19:50:19.627701 systemd[1]: Created slice kubepods-besteffort-pod4417f6a3_01e0_49fb_9d3f_bcb689073cee.slice - libcontainer container kubepods-besteffort-pod4417f6a3_01e0_49fb_9d3f_bcb689073cee.slice. Feb 13 19:50:19.634184 kubelet[2512]: I0213 19:50:19.634126 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4417f6a3-01e0-49fb-9d3f-bcb689073cee-lib-modules\") pod \"kube-proxy-9tv6m\" (UID: \"4417f6a3-01e0-49fb-9d3f-bcb689073cee\") " pod="kube-system/kube-proxy-9tv6m" Feb 13 19:50:19.634184 kubelet[2512]: I0213 19:50:19.634177 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4417f6a3-01e0-49fb-9d3f-bcb689073cee-kube-proxy\") pod \"kube-proxy-9tv6m\" (UID: \"4417f6a3-01e0-49fb-9d3f-bcb689073cee\") " pod="kube-system/kube-proxy-9tv6m" Feb 13 19:50:19.634184 kubelet[2512]: I0213 19:50:19.634197 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4417f6a3-01e0-49fb-9d3f-bcb689073cee-xtables-lock\") pod \"kube-proxy-9tv6m\" (UID: \"4417f6a3-01e0-49fb-9d3f-bcb689073cee\") " pod="kube-system/kube-proxy-9tv6m" Feb 13 19:50:19.634422 kubelet[2512]: I0213 19:50:19.634227 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s96x8\" (UniqueName: \"kubernetes.io/projected/4417f6a3-01e0-49fb-9d3f-bcb689073cee-kube-api-access-s96x8\") pod \"kube-proxy-9tv6m\" (UID: \"4417f6a3-01e0-49fb-9d3f-bcb689073cee\") " pod="kube-system/kube-proxy-9tv6m" Feb 13 19:50:19.700190 systemd[1]: Created slice kubepods-besteffort-pod182721b1_15ec_4c61_a67e_f5fd44b9d753.slice - libcontainer container kubepods-besteffort-pod182721b1_15ec_4c61_a67e_f5fd44b9d753.slice. 
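The "Created slice kubepods-besteffort-pod..." entries show the systemd cgroup driver deriving a leaf slice name from the pod's QoS class and UID: the same UID that appears hyphenated in the volume entries (4417f6a3-01e0-49fb-9d3f-bcb689073cee) reappears with underscores in the slice name, because '-' is a hierarchy separator in systemd slice names. A small illustrative sketch of that mapping for a BestEffort pod (not kubelet code):

    # Illustrative only: leaf systemd slice name for a BestEffort pod, as seen
    # in the "Created slice ..." journal entries above.
    def besteffort_pod_slice(pod_uid: str) -> str:
        # '-' separates hierarchy levels in slice names, so the UID's hyphens
        # are replaced with underscores.
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    print(besteffort_pod_slice("4417f6a3-01e0-49fb-9d3f-bcb689073cee"))
    # kubepods-besteffort-pod4417f6a3_01e0_49fb_9d3f_bcb689073cee.slice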
Feb 13 19:50:19.734946 kubelet[2512]: I0213 19:50:19.734894 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/182721b1-15ec-4c61-a67e-f5fd44b9d753-var-lib-calico\") pod \"tigera-operator-7d68577dc5-cbpm4\" (UID: \"182721b1-15ec-4c61-a67e-f5fd44b9d753\") " pod="tigera-operator/tigera-operator-7d68577dc5-cbpm4" Feb 13 19:50:19.734946 kubelet[2512]: I0213 19:50:19.734961 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lqb\" (UniqueName: \"kubernetes.io/projected/182721b1-15ec-4c61-a67e-f5fd44b9d753-kube-api-access-86lqb\") pod \"tigera-operator-7d68577dc5-cbpm4\" (UID: \"182721b1-15ec-4c61-a67e-f5fd44b9d753\") " pod="tigera-operator/tigera-operator-7d68577dc5-cbpm4" Feb 13 19:50:19.859933 sudo[1652]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:19.862576 sshd[1649]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:19.867366 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:35758.service: Deactivated successfully. Feb 13 19:50:19.869866 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:50:19.870091 systemd[1]: session-9.scope: Consumed 4.375s CPU time, 160.1M memory peak, 0B memory swap peak. Feb 13 19:50:19.870661 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:50:19.872041 systemd-logind[1442]: Removed session 9. Feb 13 19:50:20.004842 containerd[1460]: time="2025-02-13T19:50:20.004749705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-cbpm4,Uid:182721b1-15ec-4c61-a67e-f5fd44b9d753,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:50:20.036711 containerd[1460]: time="2025-02-13T19:50:20.036593825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:20.036711 containerd[1460]: time="2025-02-13T19:50:20.036678584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:20.037055 containerd[1460]: time="2025-02-13T19:50:20.036860101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:20.037055 containerd[1460]: time="2025-02-13T19:50:20.037001352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:20.059845 systemd[1]: Started cri-containerd-277d7f6ed49fd037c1d97c81fe1c5749ba545f3729621e1bc3506c1b084a6424.scope - libcontainer container 277d7f6ed49fd037c1d97c81fe1c5749ba545f3729621e1bc3506c1b084a6424. 
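Entries from systemd, kubelet and containerd are interleaved here in the common "Mon DD HH:MM:SS.ffffff unit[pid]: message" shape. Below is a small helper for reading logs like this one that splits a single entry into its fields; it assumes entries are handled one at a time in exactly that shape (a hypothetical helper, not part of any of the tools logging above):

    # Split one journal entry of the form used throughout this log into fields.
    import re

    ENTRY = re.compile(r"^(?P<ts>\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+) "
                       r"(?P<unit>[\w@.-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$")

    line = ("Feb 13 19:50:20.059845 systemd[1]: Started cri-containerd-"
            "277d7f6ed49fd037c1d97c81fe1c5749ba545f3729621e1bc3506c1b084a6424.scope "
            "- libcontainer container 277d7f6ed49fd037c1d97c81fe1c5749ba545f3729621e1bc3506c1b084a6424.")
    m = ENTRY.match(line)
    print(m.group("ts"), m.group("unit"), m.group("pid"))
    # Feb 13 19:50:20.059845 systemd 1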
Feb 13 19:50:20.100144 containerd[1460]: time="2025-02-13T19:50:20.100084699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-cbpm4,Uid:182721b1-15ec-4c61-a67e-f5fd44b9d753,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"277d7f6ed49fd037c1d97c81fe1c5749ba545f3729621e1bc3506c1b084a6424\"" Feb 13 19:50:20.101849 containerd[1460]: time="2025-02-13T19:50:20.101822067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:50:20.579716 kubelet[2512]: E0213 19:50:20.579653 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:20.638726 kubelet[2512]: E0213 19:50:20.638691 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:20.735323 kubelet[2512]: E0213 19:50:20.735275 2512 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:50:20.735841 kubelet[2512]: E0213 19:50:20.735366 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4417f6a3-01e0-49fb-9d3f-bcb689073cee-kube-proxy podName:4417f6a3-01e0-49fb-9d3f-bcb689073cee nodeName:}" failed. No retries permitted until 2025-02-13 19:50:21.235346225 +0000 UTC m=+6.716950687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4417f6a3-01e0-49fb-9d3f-bcb689073cee-kube-proxy") pod "kube-proxy-9tv6m" (UID: "4417f6a3-01e0-49fb-9d3f-bcb689073cee") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:50:21.437727 kubelet[2512]: E0213 19:50:21.437663 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:21.438318 containerd[1460]: time="2025-02-13T19:50:21.438271993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9tv6m,Uid:4417f6a3-01e0-49fb-9d3f-bcb689073cee,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:21.690075 containerd[1460]: time="2025-02-13T19:50:21.689819986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:21.690075 containerd[1460]: time="2025-02-13T19:50:21.689916676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:21.690075 containerd[1460]: time="2025-02-13T19:50:21.689959417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:21.690245 containerd[1460]: time="2025-02-13T19:50:21.690102243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:21.720832 systemd[1]: Started cri-containerd-a5bf7f6d65fa996c0c3e56935678a88ddb498ab580318623d63c85dc8177329a.scope - libcontainer container a5bf7f6d65fa996c0c3e56935678a88ddb498ab580318623d63c85dc8177329a. 
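The MountVolume.SetUp failure for the kube-proxy ConfigMap above comes with a retry deadline: nestedpendingoperations records the failed attempt and forbids another one for durationBeforeRetry (500ms here), which is exactly the gap between the failure and the "No retries permitted until 2025-02-13 19:50:21.235346225" timestamp. The retry evidently succeeded, since the kube-proxy sandbox is created at 19:50:21.438. A quick arithmetic check (illustrative only, microsecond precision):

    from datetime import datetime, timedelta

    # "No retries permitted until ..." minus the 500ms durationBeforeRetry
    no_retry_before = datetime(2025, 2, 13, 19, 50, 21, 235346)
    failure_time = no_retry_before - timedelta(milliseconds=500)
    print(failure_time)
    # 2025-02-13 19:50:20.735346 -- about 20 microseconds before the
    # E0213 19:50:20.735366 line that reports the failure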
Feb 13 19:50:21.743543 containerd[1460]: time="2025-02-13T19:50:21.743470165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9tv6m,Uid:4417f6a3-01e0-49fb-9d3f-bcb689073cee,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5bf7f6d65fa996c0c3e56935678a88ddb498ab580318623d63c85dc8177329a\"" Feb 13 19:50:21.744351 kubelet[2512]: E0213 19:50:21.744327 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:21.746469 containerd[1460]: time="2025-02-13T19:50:21.746414644Z" level=info msg="CreateContainer within sandbox \"a5bf7f6d65fa996c0c3e56935678a88ddb498ab580318623d63c85dc8177329a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:50:21.783299 containerd[1460]: time="2025-02-13T19:50:21.783212565Z" level=info msg="CreateContainer within sandbox \"a5bf7f6d65fa996c0c3e56935678a88ddb498ab580318623d63c85dc8177329a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad13d9b8695fa06c8c7d32cd7c6563b6ac8a29430f02fb1eb58bbe09a363ee48\"" Feb 13 19:50:21.784084 containerd[1460]: time="2025-02-13T19:50:21.784053155Z" level=info msg="StartContainer for \"ad13d9b8695fa06c8c7d32cd7c6563b6ac8a29430f02fb1eb58bbe09a363ee48\"" Feb 13 19:50:21.817756 systemd[1]: Started cri-containerd-ad13d9b8695fa06c8c7d32cd7c6563b6ac8a29430f02fb1eb58bbe09a363ee48.scope - libcontainer container ad13d9b8695fa06c8c7d32cd7c6563b6ac8a29430f02fb1eb58bbe09a363ee48. Feb 13 19:50:21.851646 containerd[1460]: time="2025-02-13T19:50:21.851605853Z" level=info msg="StartContainer for \"ad13d9b8695fa06c8c7d32cd7c6563b6ac8a29430f02fb1eb58bbe09a363ee48\" returns successfully" Feb 13 19:50:22.559409 containerd[1460]: time="2025-02-13T19:50:22.559315722Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:22.560206 containerd[1460]: time="2025-02-13T19:50:22.560137692Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:50:22.561393 containerd[1460]: time="2025-02-13T19:50:22.561361103Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:22.563635 containerd[1460]: time="2025-02-13T19:50:22.563594057Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:22.564333 containerd[1460]: time="2025-02-13T19:50:22.564298310Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.462442949s" Feb 13 19:50:22.564333 containerd[1460]: time="2025-02-13T19:50:22.564331315Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:50:22.566105 containerd[1460]: time="2025-02-13T19:50:22.566078248Z" level=info msg="CreateContainer within sandbox 
\"277d7f6ed49fd037c1d97c81fe1c5749ba545f3729621e1bc3506c1b084a6424\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:50:22.579179 containerd[1460]: time="2025-02-13T19:50:22.579125886Z" level=info msg="CreateContainer within sandbox \"277d7f6ed49fd037c1d97c81fe1c5749ba545f3729621e1bc3506c1b084a6424\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0abdc91a68439972fb5d768d372fc977346b72517434f6bff667af2b68f33a77\"" Feb 13 19:50:22.579655 containerd[1460]: time="2025-02-13T19:50:22.579621071Z" level=info msg="StartContainer for \"0abdc91a68439972fb5d768d372fc977346b72517434f6bff667af2b68f33a77\"" Feb 13 19:50:22.606665 systemd[1]: Started cri-containerd-0abdc91a68439972fb5d768d372fc977346b72517434f6bff667af2b68f33a77.scope - libcontainer container 0abdc91a68439972fb5d768d372fc977346b72517434f6bff667af2b68f33a77. Feb 13 19:50:22.774796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258412259.mount: Deactivated successfully. Feb 13 19:50:22.821340 containerd[1460]: time="2025-02-13T19:50:22.821191076Z" level=info msg="StartContainer for \"0abdc91a68439972fb5d768d372fc977346b72517434f6bff667af2b68f33a77\" returns successfully" Feb 13 19:50:22.825441 kubelet[2512]: E0213 19:50:22.825413 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:23.464493 update_engine[1446]: I20250213 19:50:23.464395 1446 update_attempter.cc:509] Updating boot flags... Feb 13 19:50:23.488604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2897) Feb 13 19:50:23.530665 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2897) Feb 13 19:50:23.552574 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2897) Feb 13 19:50:23.837430 kubelet[2512]: I0213 19:50:23.837373 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9tv6m" podStartSLOduration=4.837350659 podStartE2EDuration="4.837350659s" podCreationTimestamp="2025-02-13 19:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:22.834006967 +0000 UTC m=+8.315611429" watchObservedRunningTime="2025-02-13 19:50:23.837350659 +0000 UTC m=+9.318955121" Feb 13 19:50:25.919563 kubelet[2512]: I0213 19:50:25.919469 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-cbpm4" podStartSLOduration=4.455826567 podStartE2EDuration="6.919445486s" podCreationTimestamp="2025-02-13 19:50:19 +0000 UTC" firstStartedPulling="2025-02-13 19:50:20.101458711 +0000 UTC m=+5.583063173" lastFinishedPulling="2025-02-13 19:50:22.56507762 +0000 UTC m=+8.046682092" observedRunningTime="2025-02-13 19:50:23.837810151 +0000 UTC m=+9.319414613" watchObservedRunningTime="2025-02-13 19:50:25.919445486 +0000 UTC m=+11.401049948" Feb 13 19:50:25.934301 systemd[1]: Created slice kubepods-besteffort-pod36d298dc_d17e_4f87_91f8_029ae72ec6ac.slice - libcontainer container kubepods-besteffort-pod36d298dc_d17e_4f87_91f8_029ae72ec6ac.slice. 
Feb 13 19:50:25.976193 kubelet[2512]: I0213 19:50:25.976128 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d298dc-d17e-4f87-91f8-029ae72ec6ac-tigera-ca-bundle\") pod \"calico-typha-5966d64d4-vlt9w\" (UID: \"36d298dc-d17e-4f87-91f8-029ae72ec6ac\") " pod="calico-system/calico-typha-5966d64d4-vlt9w" Feb 13 19:50:25.976193 kubelet[2512]: I0213 19:50:25.976191 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/36d298dc-d17e-4f87-91f8-029ae72ec6ac-typha-certs\") pod \"calico-typha-5966d64d4-vlt9w\" (UID: \"36d298dc-d17e-4f87-91f8-029ae72ec6ac\") " pod="calico-system/calico-typha-5966d64d4-vlt9w" Feb 13 19:50:25.976193 kubelet[2512]: I0213 19:50:25.976221 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sdgz\" (UniqueName: \"kubernetes.io/projected/36d298dc-d17e-4f87-91f8-029ae72ec6ac-kube-api-access-6sdgz\") pod \"calico-typha-5966d64d4-vlt9w\" (UID: \"36d298dc-d17e-4f87-91f8-029ae72ec6ac\") " pod="calico-system/calico-typha-5966d64d4-vlt9w" Feb 13 19:50:26.046763 systemd[1]: Created slice kubepods-besteffort-podb21d5cce_88cd_478b_b123_4ff08ac7f29e.slice - libcontainer container kubepods-besteffort-podb21d5cce_88cd_478b_b123_4ff08ac7f29e.slice. Feb 13 19:50:26.077187 kubelet[2512]: I0213 19:50:26.077126 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-bin-dir\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077364 kubelet[2512]: I0213 19:50:26.077215 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b21d5cce-88cd-478b-b123-4ff08ac7f29e-node-certs\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077364 kubelet[2512]: I0213 19:50:26.077245 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-run-calico\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077364 kubelet[2512]: I0213 19:50:26.077272 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-log-dir\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077364 kubelet[2512]: I0213 19:50:26.077293 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22x7h\" (UniqueName: \"kubernetes.io/projected/b21d5cce-88cd-478b-b123-4ff08ac7f29e-kube-api-access-22x7h\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077364 kubelet[2512]: I0213 19:50:26.077323 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-net-dir\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077499 kubelet[2512]: I0213 19:50:26.077350 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21d5cce-88cd-478b-b123-4ff08ac7f29e-tigera-ca-bundle\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077499 kubelet[2512]: I0213 19:50:26.077383 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-lib-modules\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077499 kubelet[2512]: I0213 19:50:26.077406 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-xtables-lock\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077499 kubelet[2512]: I0213 19:50:26.077426 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-flexvol-driver-host\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077499 kubelet[2512]: I0213 19:50:26.077460 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-lib-calico\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.077660 kubelet[2512]: I0213 19:50:26.077480 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-policysync\") pod \"calico-node-rpv6b\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " pod="calico-system/calico-node-rpv6b" Feb 13 19:50:26.147665 kubelet[2512]: E0213 19:50:26.147606 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:26.178911 kubelet[2512]: I0213 19:50:26.178474 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd-socket-dir\") pod \"csi-node-driver-g7d74\" (UID: \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\") " pod="calico-system/csi-node-driver-g7d74" Feb 13 19:50:26.178911 kubelet[2512]: I0213 19:50:26.178549 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd-registration-dir\") pod \"csi-node-driver-g7d74\" (UID: \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\") " pod="calico-system/csi-node-driver-g7d74" Feb 13 19:50:26.178911 kubelet[2512]: I0213 19:50:26.178865 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd-kubelet-dir\") pod \"csi-node-driver-g7d74\" (UID: \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\") " pod="calico-system/csi-node-driver-g7d74" Feb 13 19:50:26.179150 kubelet[2512]: I0213 19:50:26.178945 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lzmw\" (UniqueName: \"kubernetes.io/projected/171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd-kube-api-access-2lzmw\") pod \"csi-node-driver-g7d74\" (UID: \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\") " pod="calico-system/csi-node-driver-g7d74" Feb 13 19:50:26.179150 kubelet[2512]: I0213 19:50:26.179065 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd-varrun\") pod \"csi-node-driver-g7d74\" (UID: \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\") " pod="calico-system/csi-node-driver-g7d74" Feb 13 19:50:26.181797 kubelet[2512]: E0213 19:50:26.181767 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.182787 kubelet[2512]: W0213 19:50:26.181941 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.182787 kubelet[2512]: E0213 19:50:26.181979 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.186772 kubelet[2512]: E0213 19:50:26.186735 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.186772 kubelet[2512]: W0213 19:50:26.186760 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.186897 kubelet[2512]: E0213 19:50:26.186783 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.193437 kubelet[2512]: E0213 19:50:26.193287 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.193437 kubelet[2512]: W0213 19:50:26.193310 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.193437 kubelet[2512]: E0213 19:50:26.193332 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.241057 kubelet[2512]: E0213 19:50:26.241020 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:26.241747 containerd[1460]: time="2025-02-13T19:50:26.241694751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5966d64d4-vlt9w,Uid:36d298dc-d17e-4f87-91f8-029ae72ec6ac,Namespace:calico-system,Attempt:0,}" Feb 13 19:50:26.270767 containerd[1460]: time="2025-02-13T19:50:26.269950362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:26.270767 containerd[1460]: time="2025-02-13T19:50:26.270048349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:26.270767 containerd[1460]: time="2025-02-13T19:50:26.270076838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:26.270767 containerd[1460]: time="2025-02-13T19:50:26.270174546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:26.282899 kubelet[2512]: E0213 19:50:26.282341 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.282899 kubelet[2512]: W0213 19:50:26.282371 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.282899 kubelet[2512]: E0213 19:50:26.282398 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.285645 kubelet[2512]: E0213 19:50:26.283264 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.285645 kubelet[2512]: W0213 19:50:26.283281 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.285645 kubelet[2512]: E0213 19:50:26.283394 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.287004 kubelet[2512]: E0213 19:50:26.286963 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.287072 kubelet[2512]: W0213 19:50:26.287003 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.287110 kubelet[2512]: E0213 19:50:26.287073 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.288550 kubelet[2512]: E0213 19:50:26.287562 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.288550 kubelet[2512]: W0213 19:50:26.287580 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.288550 kubelet[2512]: E0213 19:50:26.287697 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.288550 kubelet[2512]: E0213 19:50:26.287934 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.288550 kubelet[2512]: W0213 19:50:26.287945 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.288550 kubelet[2512]: E0213 19:50:26.288031 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.288550 kubelet[2512]: E0213 19:50:26.288314 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.288550 kubelet[2512]: W0213 19:50:26.288323 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.288550 kubelet[2512]: E0213 19:50:26.288462 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.290618 kubelet[2512]: E0213 19:50:26.290464 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.290981 kubelet[2512]: W0213 19:50:26.290937 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.290981 kubelet[2512]: E0213 19:50:26.290957 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.291849 kubelet[2512]: E0213 19:50:26.291712 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.291849 kubelet[2512]: W0213 19:50:26.291840 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.294035 kubelet[2512]: E0213 19:50:26.291940 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.294035 kubelet[2512]: E0213 19:50:26.292654 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.294035 kubelet[2512]: W0213 19:50:26.292665 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.294035 kubelet[2512]: E0213 19:50:26.293282 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.295072 kubelet[2512]: E0213 19:50:26.294568 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.295072 kubelet[2512]: W0213 19:50:26.294581 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.295072 kubelet[2512]: E0213 19:50:26.294610 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.295594 kubelet[2512]: E0213 19:50:26.295413 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.295594 kubelet[2512]: W0213 19:50:26.295430 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.295790 systemd[1]: Started cri-containerd-fe5f39be119f67e49d05afcc3dd67493ec940c398cb9bbfa0bae68f949854a09.scope - libcontainer container fe5f39be119f67e49d05afcc3dd67493ec940c398cb9bbfa0bae68f949854a09. Feb 13 19:50:26.296174 kubelet[2512]: E0213 19:50:26.296041 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.296174 kubelet[2512]: W0213 19:50:26.296055 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.296174 kubelet[2512]: E0213 19:50:26.296078 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.296174 kubelet[2512]: E0213 19:50:26.296115 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.296504 kubelet[2512]: E0213 19:50:26.296474 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.296504 kubelet[2512]: W0213 19:50:26.296491 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.296903 kubelet[2512]: E0213 19:50:26.296774 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.297161 kubelet[2512]: E0213 19:50:26.297124 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.297346 kubelet[2512]: W0213 19:50:26.297137 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.297346 kubelet[2512]: E0213 19:50:26.297252 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.297552 kubelet[2512]: E0213 19:50:26.297498 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.297552 kubelet[2512]: W0213 19:50:26.297510 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.298285 kubelet[2512]: E0213 19:50:26.297888 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.299461 kubelet[2512]: E0213 19:50:26.299166 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.299461 kubelet[2512]: W0213 19:50:26.299180 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.299788 kubelet[2512]: E0213 19:50:26.299666 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.299993 kubelet[2512]: E0213 19:50:26.299892 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.300266 kubelet[2512]: W0213 19:50:26.300051 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.300640 kubelet[2512]: E0213 19:50:26.300331 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.300640 kubelet[2512]: W0213 19:50:26.300549 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.300640 kubelet[2512]: E0213 19:50:26.300613 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.300760 kubelet[2512]: E0213 19:50:26.300687 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.301651 kubelet[2512]: E0213 19:50:26.301260 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.301651 kubelet[2512]: W0213 19:50:26.301278 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.301651 kubelet[2512]: E0213 19:50:26.301292 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.301758 kubelet[2512]: E0213 19:50:26.301695 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.301758 kubelet[2512]: W0213 19:50:26.301706 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.301841 kubelet[2512]: E0213 19:50:26.301757 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.302316 kubelet[2512]: E0213 19:50:26.302128 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.302316 kubelet[2512]: W0213 19:50:26.302169 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.302316 kubelet[2512]: E0213 19:50:26.302190 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.302649 kubelet[2512]: E0213 19:50:26.302597 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.302649 kubelet[2512]: W0213 19:50:26.302610 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.302854 kubelet[2512]: E0213 19:50:26.302736 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.303506 kubelet[2512]: E0213 19:50:26.303449 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.303506 kubelet[2512]: W0213 19:50:26.303463 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.303506 kubelet[2512]: E0213 19:50:26.303479 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.305196 kubelet[2512]: E0213 19:50:26.304726 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.305196 kubelet[2512]: W0213 19:50:26.304750 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.305196 kubelet[2512]: E0213 19:50:26.304763 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.305290 kubelet[2512]: E0213 19:50:26.305274 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.305290 kubelet[2512]: W0213 19:50:26.305285 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.305361 kubelet[2512]: E0213 19:50:26.305297 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.306428 kubelet[2512]: E0213 19:50:26.306398 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.306505 kubelet[2512]: W0213 19:50:26.306486 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.306566 kubelet[2512]: E0213 19:50:26.306508 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.348420 containerd[1460]: time="2025-02-13T19:50:26.348357236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5966d64d4-vlt9w,Uid:36d298dc-d17e-4f87-91f8-029ae72ec6ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe5f39be119f67e49d05afcc3dd67493ec940c398cb9bbfa0bae68f949854a09\"" Feb 13 19:50:26.349333 kubelet[2512]: E0213 19:50:26.349313 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:26.350936 containerd[1460]: time="2025-02-13T19:50:26.350678036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:50:26.352495 kubelet[2512]: E0213 19:50:26.352463 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:26.353000 containerd[1460]: time="2025-02-13T19:50:26.352925720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rpv6b,Uid:b21d5cce-88cd-478b-b123-4ff08ac7f29e,Namespace:calico-system,Attempt:0,}" Feb 13 19:50:26.421968 containerd[1460]: time="2025-02-13T19:50:26.421608140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:26.421968 containerd[1460]: time="2025-02-13T19:50:26.421713170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:26.421968 containerd[1460]: time="2025-02-13T19:50:26.421725001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:26.421968 containerd[1460]: time="2025-02-13T19:50:26.421843524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:26.447675 systemd[1]: Started cri-containerd-c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7.scope - libcontainer container c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7. 
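The repeated driver-call.go and plugins.go errors throughout this stretch come from the kubelet's FlexVolume prober: it finds a vendor~driver directory (nodeagent~uds) under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to run the driver binary (uds) with the init argument, the binary is not there ("executable file not found in $PATH"), so the call produces no output and unmarshalling that empty output as JSON fails. Nothing in this log mounts a FlexVolume, so the messages are noise rather than a functional failure. A rough diagnostic sketch along the same lines (path and vendor~driver naming taken from the messages above; a hypothetical helper, not kubelet code):

    # Flag FlexVolume driver directories whose driver binary is missing -- the
    # situation behind the repeated "driver call failed ... executable file not
    # found" entries above.
    import os

    EXEC_DIR = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"

    def check_flexvolume_drivers(exec_dir: str = EXEC_DIR) -> None:
        if not os.path.isdir(exec_dir):
            print(f"no FlexVolume exec dir at {exec_dir}")
            return
        for entry in sorted(os.listdir(exec_dir)):        # e.g. "nodeagent~uds"
            driver = entry.split("~")[-1]                 # e.g. "uds"
            binary = os.path.join(exec_dir, entry, driver)
            if os.path.isfile(binary) and os.access(binary, os.X_OK):
                print(f"{entry}: driver binary present")
            else:
                print(f"{entry}: driver binary '{driver}' missing or not executable")

    check_flexvolume_drivers()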
Feb 13 19:50:26.473134 containerd[1460]: time="2025-02-13T19:50:26.473097970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rpv6b,Uid:b21d5cce-88cd-478b-b123-4ff08ac7f29e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\"" Feb 13 19:50:26.474285 kubelet[2512]: E0213 19:50:26.474088 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:26.894451 kubelet[2512]: E0213 19:50:26.894356 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:26.976002 kubelet[2512]: E0213 19:50:26.975961 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.976002 kubelet[2512]: W0213 19:50:26.975988 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.976002 kubelet[2512]: E0213 19:50:26.976014 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.976640 kubelet[2512]: E0213 19:50:26.976233 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.976640 kubelet[2512]: W0213 19:50:26.976241 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.976640 kubelet[2512]: E0213 19:50:26.976257 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.976640 kubelet[2512]: E0213 19:50:26.976490 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.976640 kubelet[2512]: W0213 19:50:26.976499 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.976640 kubelet[2512]: E0213 19:50:26.976507 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.976901 kubelet[2512]: E0213 19:50:26.976831 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.976901 kubelet[2512]: W0213 19:50:26.976849 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.976901 kubelet[2512]: E0213 19:50:26.976858 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.977137 kubelet[2512]: E0213 19:50:26.977122 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.977137 kubelet[2512]: W0213 19:50:26.977133 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.977203 kubelet[2512]: E0213 19:50:26.977142 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:27.618421 kubelet[2512]: E0213 19:50:27.618355 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:27.836910 kubelet[2512]: E0213 19:50:27.836865 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:27.884428 kubelet[2512]: E0213 19:50:27.884283 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:27.884428 kubelet[2512]: W0213 19:50:27.884316 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:27.884428 kubelet[2512]: E0213 19:50:27.884342 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:27.884734 kubelet[2512]: E0213 19:50:27.884704 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:27.884734 kubelet[2512]: W0213 19:50:27.884720 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:27.884734 kubelet[2512]: E0213 19:50:27.884732 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:27.885037 kubelet[2512]: E0213 19:50:27.885002 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:27.885037 kubelet[2512]: W0213 19:50:27.885015 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:27.885037 kubelet[2512]: E0213 19:50:27.885028 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:27.885290 kubelet[2512]: E0213 19:50:27.885264 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:27.885290 kubelet[2512]: W0213 19:50:27.885278 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:27.885290 kubelet[2512]: E0213 19:50:27.885288 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:27.885598 kubelet[2512]: E0213 19:50:27.885577 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:27.885598 kubelet[2512]: W0213 19:50:27.885590 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:27.885598 kubelet[2512]: E0213 19:50:27.885599 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.355265 kubelet[2512]: E0213 19:50:28.355180 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:28.389598 kubelet[2512]: E0213 19:50:28.389415 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.389598 kubelet[2512]: W0213 19:50:28.389579 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.389797 kubelet[2512]: E0213 19:50:28.389623 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.389877 kubelet[2512]: E0213 19:50:28.389859 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.389877 kubelet[2512]: W0213 19:50:28.389871 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.389958 kubelet[2512]: E0213 19:50:28.389884 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:28.390131 kubelet[2512]: E0213 19:50:28.390107 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.390131 kubelet[2512]: W0213 19:50:28.390119 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.390131 kubelet[2512]: E0213 19:50:28.390129 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.390359 kubelet[2512]: E0213 19:50:28.390343 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.390359 kubelet[2512]: W0213 19:50:28.390354 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.390454 kubelet[2512]: E0213 19:50:28.390364 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.390637 kubelet[2512]: E0213 19:50:28.390614 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.390637 kubelet[2512]: W0213 19:50:28.390628 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.390883 kubelet[2512]: E0213 19:50:28.390638 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.390883 kubelet[2512]: E0213 19:50:28.390853 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.390883 kubelet[2512]: W0213 19:50:28.390863 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.390883 kubelet[2512]: E0213 19:50:28.390873 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.391120 kubelet[2512]: E0213 19:50:28.391081 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.391120 kubelet[2512]: W0213 19:50:28.391107 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.391120 kubelet[2512]: E0213 19:50:28.391119 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:28.391354 kubelet[2512]: E0213 19:50:28.391323 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.391354 kubelet[2512]: W0213 19:50:28.391343 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.391354 kubelet[2512]: E0213 19:50:28.391352 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.391583 kubelet[2512]: E0213 19:50:28.391567 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.391583 kubelet[2512]: W0213 19:50:28.391578 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.391687 kubelet[2512]: E0213 19:50:28.391597 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.391823 kubelet[2512]: E0213 19:50:28.391807 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.391823 kubelet[2512]: W0213 19:50:28.391819 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.391921 kubelet[2512]: E0213 19:50:28.391829 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.392064 kubelet[2512]: E0213 19:50:28.392042 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.392064 kubelet[2512]: W0213 19:50:28.392053 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.392064 kubelet[2512]: E0213 19:50:28.392063 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.392324 kubelet[2512]: E0213 19:50:28.392300 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.392324 kubelet[2512]: W0213 19:50:28.392313 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.392324 kubelet[2512]: E0213 19:50:28.392322 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:28.392569 kubelet[2512]: E0213 19:50:28.392547 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.392569 kubelet[2512]: W0213 19:50:28.392558 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.392569 kubelet[2512]: E0213 19:50:28.392568 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.392805 kubelet[2512]: E0213 19:50:28.392784 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.392805 kubelet[2512]: W0213 19:50:28.392795 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.392805 kubelet[2512]: E0213 19:50:28.392804 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:28.393026 kubelet[2512]: E0213 19:50:28.393005 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:28.393026 kubelet[2512]: W0213 19:50:28.393015 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:28.393026 kubelet[2512]: E0213 19:50:28.393025 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:29.229882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232093651.mount: Deactivated successfully. 
Feb 13 19:50:29.618925 kubelet[2512]: E0213 19:50:29.618855 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:31.515255 containerd[1460]: time="2025-02-13T19:50:31.515176162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:31.516147 containerd[1460]: time="2025-02-13T19:50:31.516059757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 19:50:31.517454 containerd[1460]: time="2025-02-13T19:50:31.517411434Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:31.519749 containerd[1460]: time="2025-02-13T19:50:31.519705940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:31.520551 containerd[1460]: time="2025-02-13T19:50:31.520468371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 5.169739398s" Feb 13 19:50:31.520551 containerd[1460]: time="2025-02-13T19:50:31.520538295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:50:31.521951 containerd[1460]: time="2025-02-13T19:50:31.521900280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:50:31.530866 containerd[1460]: time="2025-02-13T19:50:31.530815531Z" level=info msg="CreateContainer within sandbox \"fe5f39be119f67e49d05afcc3dd67493ec940c398cb9bbfa0bae68f949854a09\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:50:31.546931 containerd[1460]: time="2025-02-13T19:50:31.546874037Z" level=info msg="CreateContainer within sandbox \"fe5f39be119f67e49d05afcc3dd67493ec940c398cb9bbfa0bae68f949854a09\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f4d26fab90046c92fc3a4b591909d870153bf0336b0055bb02a04d27fa0d3b37\"" Feb 13 19:50:31.547555 containerd[1460]: time="2025-02-13T19:50:31.547487616Z" level=info msg="StartContainer for \"f4d26fab90046c92fc3a4b591909d870153bf0336b0055bb02a04d27fa0d3b37\"" Feb 13 19:50:31.580736 systemd[1]: Started cri-containerd-f4d26fab90046c92fc3a4b591909d870153bf0336b0055bb02a04d27fa0d3b37.scope - libcontainer container f4d26fab90046c92fc3a4b591909d870153bf0336b0055bb02a04d27fa0d3b37. 
Feb 13 19:50:31.618891 kubelet[2512]: E0213 19:50:31.618819 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:32.089838 containerd[1460]: time="2025-02-13T19:50:32.089765622Z" level=info msg="StartContainer for \"f4d26fab90046c92fc3a4b591909d870153bf0336b0055bb02a04d27fa0d3b37\" returns successfully" Feb 13 19:50:33.094213 kubelet[2512]: E0213 19:50:33.094132 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:33.123164 kubelet[2512]: E0213 19:50:33.123110 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.123164 kubelet[2512]: W0213 19:50:33.123155 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.123341 kubelet[2512]: E0213 19:50:33.123183 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.123475 kubelet[2512]: E0213 19:50:33.123459 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.123475 kubelet[2512]: W0213 19:50:33.123472 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.123568 kubelet[2512]: E0213 19:50:33.123482 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.123728 kubelet[2512]: E0213 19:50:33.123706 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.123728 kubelet[2512]: W0213 19:50:33.123717 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.123728 kubelet[2512]: E0213 19:50:33.123724 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.124113 kubelet[2512]: E0213 19:50:33.124092 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.124113 kubelet[2512]: W0213 19:50:33.124103 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.124113 kubelet[2512]: E0213 19:50:33.124113 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:33.124348 kubelet[2512]: E0213 19:50:33.124329 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.124348 kubelet[2512]: W0213 19:50:33.124340 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.124348 kubelet[2512]: E0213 19:50:33.124348 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.124736 kubelet[2512]: E0213 19:50:33.124577 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.124736 kubelet[2512]: W0213 19:50:33.124598 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.124736 kubelet[2512]: E0213 19:50:33.124609 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.124961 kubelet[2512]: E0213 19:50:33.124863 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.124961 kubelet[2512]: W0213 19:50:33.124898 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.124961 kubelet[2512]: E0213 19:50:33.124920 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.125255 kubelet[2512]: E0213 19:50:33.125132 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.125255 kubelet[2512]: W0213 19:50:33.125144 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.125255 kubelet[2512]: E0213 19:50:33.125151 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.125394 kubelet[2512]: E0213 19:50:33.125374 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.125394 kubelet[2512]: W0213 19:50:33.125388 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.125469 kubelet[2512]: E0213 19:50:33.125400 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:33.125663 kubelet[2512]: E0213 19:50:33.125645 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.125663 kubelet[2512]: W0213 19:50:33.125659 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.125893 kubelet[2512]: E0213 19:50:33.125668 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.125893 kubelet[2512]: E0213 19:50:33.125868 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.125893 kubelet[2512]: W0213 19:50:33.125888 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.126005 kubelet[2512]: E0213 19:50:33.125899 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.126119 kubelet[2512]: E0213 19:50:33.126080 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.126119 kubelet[2512]: W0213 19:50:33.126092 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.126119 kubelet[2512]: E0213 19:50:33.126101 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.126297 kubelet[2512]: E0213 19:50:33.126281 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.126297 kubelet[2512]: W0213 19:50:33.126291 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.126297 kubelet[2512]: E0213 19:50:33.126298 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.126487 kubelet[2512]: E0213 19:50:33.126467 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.126487 kubelet[2512]: W0213 19:50:33.126480 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.126487 kubelet[2512]: E0213 19:50:33.126488 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:33.126742 kubelet[2512]: E0213 19:50:33.126725 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.126742 kubelet[2512]: W0213 19:50:33.126736 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.126742 kubelet[2512]: E0213 19:50:33.126744 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.143324 kubelet[2512]: E0213 19:50:33.143278 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.143324 kubelet[2512]: W0213 19:50:33.143313 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.143478 kubelet[2512]: E0213 19:50:33.143347 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.143763 kubelet[2512]: E0213 19:50:33.143748 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.143763 kubelet[2512]: W0213 19:50:33.143761 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.143821 kubelet[2512]: E0213 19:50:33.143777 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.144070 kubelet[2512]: E0213 19:50:33.144043 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.144070 kubelet[2512]: W0213 19:50:33.144056 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.144070 kubelet[2512]: E0213 19:50:33.144073 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.144321 kubelet[2512]: E0213 19:50:33.144298 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.144321 kubelet[2512]: W0213 19:50:33.144317 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.144409 kubelet[2512]: E0213 19:50:33.144333 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:33.144619 kubelet[2512]: E0213 19:50:33.144602 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.144619 kubelet[2512]: W0213 19:50:33.144613 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.144698 kubelet[2512]: E0213 19:50:33.144629 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.144860 kubelet[2512]: E0213 19:50:33.144834 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.144860 kubelet[2512]: W0213 19:50:33.144846 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.144860 kubelet[2512]: E0213 19:50:33.144857 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.145100 kubelet[2512]: E0213 19:50:33.145084 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.145100 kubelet[2512]: W0213 19:50:33.145094 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.145173 kubelet[2512]: E0213 19:50:33.145140 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.145295 kubelet[2512]: E0213 19:50:33.145279 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.145295 kubelet[2512]: W0213 19:50:33.145289 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.145365 kubelet[2512]: E0213 19:50:33.145315 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.145502 kubelet[2512]: E0213 19:50:33.145485 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.145502 kubelet[2512]: W0213 19:50:33.145495 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.145655 kubelet[2512]: E0213 19:50:33.145509 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:33.145745 kubelet[2512]: E0213 19:50:33.145729 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.145745 kubelet[2512]: W0213 19:50:33.145739 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.145811 kubelet[2512]: E0213 19:50:33.145751 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.145962 kubelet[2512]: E0213 19:50:33.145946 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.145962 kubelet[2512]: W0213 19:50:33.145956 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.146029 kubelet[2512]: E0213 19:50:33.145968 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.146182 kubelet[2512]: E0213 19:50:33.146167 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.146182 kubelet[2512]: W0213 19:50:33.146177 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.146257 kubelet[2512]: E0213 19:50:33.146189 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.146564 kubelet[2512]: E0213 19:50:33.146542 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.146564 kubelet[2512]: W0213 19:50:33.146562 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.146664 kubelet[2512]: E0213 19:50:33.146588 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.146831 kubelet[2512]: E0213 19:50:33.146817 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.146831 kubelet[2512]: W0213 19:50:33.146828 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.146916 kubelet[2512]: E0213 19:50:33.146860 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:33.147076 kubelet[2512]: E0213 19:50:33.147061 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.147076 kubelet[2512]: W0213 19:50:33.147073 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.147164 kubelet[2512]: E0213 19:50:33.147100 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.147302 kubelet[2512]: E0213 19:50:33.147287 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.147302 kubelet[2512]: W0213 19:50:33.147298 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.147382 kubelet[2512]: E0213 19:50:33.147314 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.147665 kubelet[2512]: E0213 19:50:33.147645 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.147665 kubelet[2512]: W0213 19:50:33.147656 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.147746 kubelet[2512]: E0213 19:50:33.147667 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:33.148209 kubelet[2512]: E0213 19:50:33.148185 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:33.148209 kubelet[2512]: W0213 19:50:33.148198 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:33.148209 kubelet[2512]: E0213 19:50:33.148208 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:33.618439 kubelet[2512]: E0213 19:50:33.618368 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:34.001789 containerd[1460]: time="2025-02-13T19:50:34.001600681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:34.023599 containerd[1460]: time="2025-02-13T19:50:34.023463674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 19:50:34.040839 containerd[1460]: time="2025-02-13T19:50:34.040760270Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:34.049550 containerd[1460]: time="2025-02-13T19:50:34.049496555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:34.050174 containerd[1460]: time="2025-02-13T19:50:34.050132998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.528195693s" Feb 13 19:50:34.050213 containerd[1460]: time="2025-02-13T19:50:34.050175133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:50:34.062237 containerd[1460]: time="2025-02-13T19:50:34.062170947Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:50:34.095175 kubelet[2512]: I0213 19:50:34.095124 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:50:34.095687 kubelet[2512]: E0213 19:50:34.095476 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:34.132005 kubelet[2512]: E0213 19:50:34.131966 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.132005 kubelet[2512]: W0213 19:50:34.131996 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.132147 kubelet[2512]: E0213 19:50:34.132024 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.132366 kubelet[2512]: E0213 19:50:34.132329 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.132366 kubelet[2512]: W0213 19:50:34.132355 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.132366 kubelet[2512]: E0213 19:50:34.132380 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.132697 kubelet[2512]: E0213 19:50:34.132681 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.132697 kubelet[2512]: W0213 19:50:34.132693 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.132774 kubelet[2512]: E0213 19:50:34.132714 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.132966 kubelet[2512]: E0213 19:50:34.132944 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.133002 kubelet[2512]: W0213 19:50:34.132971 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.133002 kubelet[2512]: E0213 19:50:34.132981 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.133254 kubelet[2512]: E0213 19:50:34.133240 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.133254 kubelet[2512]: W0213 19:50:34.133251 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.133323 kubelet[2512]: E0213 19:50:34.133258 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.133483 kubelet[2512]: E0213 19:50:34.133467 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.133548 kubelet[2512]: W0213 19:50:34.133479 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.133548 kubelet[2512]: E0213 19:50:34.133498 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.133762 kubelet[2512]: E0213 19:50:34.133738 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.133762 kubelet[2512]: W0213 19:50:34.133759 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.133828 kubelet[2512]: E0213 19:50:34.133767 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.133998 kubelet[2512]: E0213 19:50:34.133984 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.133998 kubelet[2512]: W0213 19:50:34.133995 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.134069 kubelet[2512]: E0213 19:50:34.134002 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.134262 kubelet[2512]: E0213 19:50:34.134227 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.134262 kubelet[2512]: W0213 19:50:34.134248 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.134262 kubelet[2512]: E0213 19:50:34.134258 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.134546 kubelet[2512]: E0213 19:50:34.134510 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.134573 kubelet[2512]: W0213 19:50:34.134547 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.134573 kubelet[2512]: E0213 19:50:34.134562 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.135180 kubelet[2512]: E0213 19:50:34.134796 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.135180 kubelet[2512]: W0213 19:50:34.134811 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.135180 kubelet[2512]: E0213 19:50:34.134823 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.135180 kubelet[2512]: E0213 19:50:34.135074 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.135180 kubelet[2512]: W0213 19:50:34.135085 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.135180 kubelet[2512]: E0213 19:50:34.135095 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.135334 kubelet[2512]: E0213 19:50:34.135308 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.135334 kubelet[2512]: W0213 19:50:34.135319 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.135334 kubelet[2512]: E0213 19:50:34.135329 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.135702 kubelet[2512]: E0213 19:50:34.135683 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.135702 kubelet[2512]: W0213 19:50:34.135697 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.135806 kubelet[2512]: E0213 19:50:34.135708 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.135938 kubelet[2512]: E0213 19:50:34.135913 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.135938 kubelet[2512]: W0213 19:50:34.135924 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.135938 kubelet[2512]: E0213 19:50:34.135932 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.150233 kubelet[2512]: E0213 19:50:34.150211 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.150233 kubelet[2512]: W0213 19:50:34.150226 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.150347 kubelet[2512]: E0213 19:50:34.150239 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.150470 kubelet[2512]: E0213 19:50:34.150453 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.150470 kubelet[2512]: W0213 19:50:34.150465 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.150533 kubelet[2512]: E0213 19:50:34.150481 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.150720 kubelet[2512]: E0213 19:50:34.150702 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.150720 kubelet[2512]: W0213 19:50:34.150717 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.150792 kubelet[2512]: E0213 19:50:34.150733 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.150948 kubelet[2512]: E0213 19:50:34.150932 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.150948 kubelet[2512]: W0213 19:50:34.150945 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.150996 kubelet[2512]: E0213 19:50:34.150958 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.151159 kubelet[2512]: E0213 19:50:34.151143 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.151159 kubelet[2512]: W0213 19:50:34.151154 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.151219 kubelet[2512]: E0213 19:50:34.151166 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.151393 kubelet[2512]: E0213 19:50:34.151374 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.151393 kubelet[2512]: W0213 19:50:34.151388 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.151442 kubelet[2512]: E0213 19:50:34.151405 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.151663 kubelet[2512]: E0213 19:50:34.151646 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.151663 kubelet[2512]: W0213 19:50:34.151660 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.151728 kubelet[2512]: E0213 19:50:34.151677 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.151913 kubelet[2512]: E0213 19:50:34.151894 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.151913 kubelet[2512]: W0213 19:50:34.151908 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.151971 kubelet[2512]: E0213 19:50:34.151923 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.152135 kubelet[2512]: E0213 19:50:34.152118 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.152135 kubelet[2512]: W0213 19:50:34.152131 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.152186 kubelet[2512]: E0213 19:50:34.152144 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.152365 kubelet[2512]: E0213 19:50:34.152348 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.152365 kubelet[2512]: W0213 19:50:34.152361 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.152418 kubelet[2512]: E0213 19:50:34.152390 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.152620 kubelet[2512]: E0213 19:50:34.152601 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.152620 kubelet[2512]: W0213 19:50:34.152615 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.152680 kubelet[2512]: E0213 19:50:34.152644 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.152829 kubelet[2512]: E0213 19:50:34.152810 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.152829 kubelet[2512]: W0213 19:50:34.152825 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.152877 kubelet[2512]: E0213 19:50:34.152843 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.153043 kubelet[2512]: E0213 19:50:34.153025 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.153043 kubelet[2512]: W0213 19:50:34.153038 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.153098 kubelet[2512]: E0213 19:50:34.153051 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.153301 kubelet[2512]: E0213 19:50:34.153282 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.153301 kubelet[2512]: W0213 19:50:34.153296 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.153352 kubelet[2512]: E0213 19:50:34.153311 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.153508 kubelet[2512]: E0213 19:50:34.153489 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.153508 kubelet[2512]: W0213 19:50:34.153503 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.153578 kubelet[2512]: E0213 19:50:34.153530 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.153793 kubelet[2512]: E0213 19:50:34.153775 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.153793 kubelet[2512]: W0213 19:50:34.153788 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.153842 kubelet[2512]: E0213 19:50:34.153804 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.154078 kubelet[2512]: E0213 19:50:34.154051 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.154078 kubelet[2512]: W0213 19:50:34.154066 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.154134 kubelet[2512]: E0213 19:50:34.154081 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.154289 kubelet[2512]: E0213 19:50:34.154271 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.154289 kubelet[2512]: W0213 19:50:34.154284 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.154332 kubelet[2512]: E0213 19:50:34.154294 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.180978 containerd[1460]: time="2025-02-13T19:50:34.180920355Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0\"" Feb 13 19:50:34.181577 containerd[1460]: time="2025-02-13T19:50:34.181552069Z" level=info msg="StartContainer for \"4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0\"" Feb 13 19:50:34.209115 systemd[1]: run-containerd-runc-k8s.io-4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0-runc.E3BW1U.mount: Deactivated successfully. Feb 13 19:50:34.222718 systemd[1]: Started cri-containerd-4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0.scope - libcontainer container 4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0. Feb 13 19:50:34.262933 containerd[1460]: time="2025-02-13T19:50:34.262765767Z" level=info msg="StartContainer for \"4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0\" returns successfully" Feb 13 19:50:34.270191 systemd[1]: cri-containerd-4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0.scope: Deactivated successfully. Feb 13 19:50:34.295459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0-rootfs.mount: Deactivated successfully. 
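The repeated driver-call failures above all have one cause: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, executes each binary with the argument "init", and parses its stdout as JSON. The nodeagent~uds/uds binary is not on disk yet (it is presumably installed by the flexvol-driver container whose StartContainer appears just above), so the probe gets empty output and the decode fails with "unexpected end of JSON input". As a rough sketch of the contract the probe expects, following the generic FlexVolume convention rather than the actual Calico driver, a minimal driver would answer "init" like this:

    package main

    import (
        "encoding/json"
        "os"
    )

    // driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Report success and advertise that this sketch does not implement attach/detach.
            json.NewEncoder(os.Stdout).Encode(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            return
        }
        // Every other sub-command is unsupported in this sketch.
        json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
    }

Once the real uds binary is in place, the probe receives a JSON status object on stdout and these errors stop repeating.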
Feb 13 19:50:34.307316 containerd[1460]: time="2025-02-13T19:50:34.307253026Z" level=info msg="shim disconnected" id=4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0 namespace=k8s.io Feb 13 19:50:34.307316 containerd[1460]: time="2025-02-13T19:50:34.307309747Z" level=warning msg="cleaning up after shim disconnected" id=4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0 namespace=k8s.io Feb 13 19:50:34.307316 containerd[1460]: time="2025-02-13T19:50:34.307320727Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:34.433454 kubelet[2512]: I0213 19:50:34.432436 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5966d64d4-vlt9w" podStartSLOduration=4.261079966 podStartE2EDuration="9.432415575s" podCreationTimestamp="2025-02-13 19:50:25 +0000 UTC" firstStartedPulling="2025-02-13 19:50:26.350345395 +0000 UTC m=+11.831949857" lastFinishedPulling="2025-02-13 19:50:31.521680984 +0000 UTC m=+17.003285466" observedRunningTime="2025-02-13 19:50:33.110713177 +0000 UTC m=+18.592317639" watchObservedRunningTime="2025-02-13 19:50:34.432415575 +0000 UTC m=+19.914020037" Feb 13 19:50:35.098018 kubelet[2512]: E0213 19:50:35.097978 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:35.098493 kubelet[2512]: E0213 19:50:35.098142 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:35.098922 containerd[1460]: time="2025-02-13T19:50:35.098876207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:50:35.618304 kubelet[2512]: E0213 19:50:35.618220 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:36.099879 kubelet[2512]: E0213 19:50:36.099850 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:37.619238 kubelet[2512]: E0213 19:50:37.619033 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:39.618761 kubelet[2512]: E0213 19:50:39.618684 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:41.805424 kubelet[2512]: E0213 19:50:41.805358 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" 
podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:43.097327 containerd[1460]: time="2025-02-13T19:50:43.097245932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:43.156781 containerd[1460]: time="2025-02-13T19:50:43.156686754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:50:43.225071 containerd[1460]: time="2025-02-13T19:50:43.224961626Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:43.291887 containerd[1460]: time="2025-02-13T19:50:43.291814729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:43.292592 containerd[1460]: time="2025-02-13T19:50:43.292511528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 8.193584691s" Feb 13 19:50:43.292592 containerd[1460]: time="2025-02-13T19:50:43.292560067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:50:43.329111 containerd[1460]: time="2025-02-13T19:50:43.329055923Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:50:43.437270 containerd[1460]: time="2025-02-13T19:50:43.437113961Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313\"" Feb 13 19:50:43.440982 containerd[1460]: time="2025-02-13T19:50:43.440955759Z" level=info msg="StartContainer for \"78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313\"" Feb 13 19:50:43.475817 systemd[1]: Started cri-containerd-78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313.scope - libcontainer container 78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313. 
Feb 13 19:50:43.815296 kubelet[2512]: E0213 19:50:43.815247 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:43.860223 containerd[1460]: time="2025-02-13T19:50:43.860137989Z" level=info msg="StartContainer for \"78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313\" returns successfully" Feb 13 19:50:44.292553 kubelet[2512]: E0213 19:50:44.292491 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:45.294017 kubelet[2512]: E0213 19:50:45.293971 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:45.730730 kubelet[2512]: E0213 19:50:45.730599 2512 kubelet.go:2579] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.112s" Feb 13 19:50:45.730849 kubelet[2512]: E0213 19:50:45.730809 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:46.461215 systemd[1]: cri-containerd-78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313.scope: Deactivated successfully. Feb 13 19:50:46.488112 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:48964.service - OpenSSH per-connection server daemon (10.0.0.1:48964). Feb 13 19:50:46.495087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313-rootfs.mount: Deactivated successfully. Feb 13 19:50:46.499054 containerd[1460]: time="2025-02-13T19:50:46.498890950Z" level=info msg="shim disconnected" id=78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313 namespace=k8s.io Feb 13 19:50:46.499054 containerd[1460]: time="2025-02-13T19:50:46.498953163Z" level=warning msg="cleaning up after shim disconnected" id=78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313 namespace=k8s.io Feb 13 19:50:46.499054 containerd[1460]: time="2025-02-13T19:50:46.498961449Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:46.540699 sshd[3313]: Accepted publickey for core from 10.0.0.1 port 48964 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:50:46.542976 sshd[3313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:46.548655 systemd-logind[1442]: New session 10 of user core. Feb 13 19:50:46.549671 kubelet[2512]: I0213 19:50:46.549646 2512 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:50:46.553857 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:50:46.594120 systemd[1]: Created slice kubepods-burstable-pod737f6164_6397_4855_90ba_00598f17612b.slice - libcontainer container kubepods-burstable-pod737f6164_6397_4855_90ba_00598f17612b.slice. 
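The recurring dns.go "Nameserver limits exceeded" warnings mean the node's /etc/resolv.conf lists more nameservers than the kubelet will copy into a pod's resolv.conf; the applied line keeps only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and the rest are dropped. A simplified sketch of that truncation follows; the three-server cap matches the applied line in the log, while the fourth entry below is purely hypothetical because the omitted servers are not shown:

    package main

    import "fmt"

    // maxDNSNameservers mirrors the cap of three nameservers per pod resolv.conf.
    const maxDNSNameservers = 3

    // capNameservers keeps the first three entries and reports whether any were omitted.
    func capNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxDNSNameservers {
            return ns, false
        }
        return ns[:maxDNSNameservers], true
    }

    func main() {
        // The fourth server is an assumption; the log only shows the three that survived.
        applied, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
        fmt.Println(applied, "omitted:", omitted) // [1.1.1.1 1.0.0.1 8.8.8.8] omitted: true
    }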
Feb 13 19:50:46.603750 systemd[1]: Created slice kubepods-besteffort-pod91218b03_c787_48a1_bed0_596bf149fa36.slice - libcontainer container kubepods-besteffort-pod91218b03_c787_48a1_bed0_596bf149fa36.slice. Feb 13 19:50:46.615804 systemd[1]: Created slice kubepods-burstable-podd19124ca_1721_4ff8_b4f5_05a576fbbc55.slice - libcontainer container kubepods-burstable-podd19124ca_1721_4ff8_b4f5_05a576fbbc55.slice. Feb 13 19:50:46.626087 systemd[1]: Created slice kubepods-besteffort-pod7f04c116_34a0_411d_a3e2_e79b0fc5cc48.slice - libcontainer container kubepods-besteffort-pod7f04c116_34a0_411d_a3e2_e79b0fc5cc48.slice. Feb 13 19:50:46.632238 kubelet[2512]: I0213 19:50:46.632204 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91218b03-c787-48a1-bed0-596bf149fa36-calico-apiserver-certs\") pod \"calico-apiserver-589bd969f9-rph6v\" (UID: \"91218b03-c787-48a1-bed0-596bf149fa36\") " pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" Feb 13 19:50:46.633029 kubelet[2512]: I0213 19:50:46.632440 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhz2l\" (UniqueName: \"kubernetes.io/projected/9521aaa2-a8f7-4df8-a700-3d246b89217a-kube-api-access-fhz2l\") pod \"calico-apiserver-589bd969f9-h7vkc\" (UID: \"9521aaa2-a8f7-4df8-a700-3d246b89217a\") " pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" Feb 13 19:50:46.633304 kubelet[2512]: I0213 19:50:46.633285 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmmp\" (UniqueName: \"kubernetes.io/projected/737f6164-6397-4855-90ba-00598f17612b-kube-api-access-kcmmp\") pod \"coredns-668d6bf9bc-ht7r8\" (UID: \"737f6164-6397-4855-90ba-00598f17612b\") " pod="kube-system/coredns-668d6bf9bc-ht7r8" Feb 13 19:50:46.633509 kubelet[2512]: I0213 19:50:46.633392 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/737f6164-6397-4855-90ba-00598f17612b-config-volume\") pod \"coredns-668d6bf9bc-ht7r8\" (UID: \"737f6164-6397-4855-90ba-00598f17612b\") " pod="kube-system/coredns-668d6bf9bc-ht7r8" Feb 13 19:50:46.633430 systemd[1]: Created slice kubepods-besteffort-pod9521aaa2_a8f7_4df8_a700_3d246b89217a.slice - libcontainer container kubepods-besteffort-pod9521aaa2_a8f7_4df8_a700_3d246b89217a.slice. 
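The "Created slice" entries show the kubelet's systemd cgroup driver creating one kubepods-<qos>-pod<uid>.slice unit per pod, with the dashes in the pod UID rewritten as underscores so the UID is valid inside a systemd unit name. A small sketch of that mapping (the helper name is invented for illustration):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds the systemd slice name visible in the journal for a pod
    // of the given QoS class ("burstable", "besteffort", ...) and UID.
    func podSliceName(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(podSliceName("burstable", "737f6164-6397-4855-90ba-00598f17612b"))
        // kubepods-burstable-pod737f6164_6397_4855_90ba_00598f17612b.slice, as logged above
    }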
Feb 13 19:50:46.634319 kubelet[2512]: I0213 19:50:46.634284 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ftv\" (UniqueName: \"kubernetes.io/projected/91218b03-c787-48a1-bed0-596bf149fa36-kube-api-access-66ftv\") pod \"calico-apiserver-589bd969f9-rph6v\" (UID: \"91218b03-c787-48a1-bed0-596bf149fa36\") " pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" Feb 13 19:50:46.634366 kubelet[2512]: I0213 19:50:46.634328 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19124ca-1721-4ff8-b4f5-05a576fbbc55-config-volume\") pod \"coredns-668d6bf9bc-lkr85\" (UID: \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\") " pod="kube-system/coredns-668d6bf9bc-lkr85" Feb 13 19:50:46.634567 kubelet[2512]: I0213 19:50:46.634405 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f04c116-34a0-411d-a3e2-e79b0fc5cc48-tigera-ca-bundle\") pod \"calico-kube-controllers-59d4c45f-7bhrp\" (UID: \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\") " pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" Feb 13 19:50:46.634620 kubelet[2512]: I0213 19:50:46.634597 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9521aaa2-a8f7-4df8-a700-3d246b89217a-calico-apiserver-certs\") pod \"calico-apiserver-589bd969f9-h7vkc\" (UID: \"9521aaa2-a8f7-4df8-a700-3d246b89217a\") " pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" Feb 13 19:50:46.634663 kubelet[2512]: I0213 19:50:46.634633 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwls\" (UniqueName: \"kubernetes.io/projected/d19124ca-1721-4ff8-b4f5-05a576fbbc55-kube-api-access-kpwls\") pod \"coredns-668d6bf9bc-lkr85\" (UID: \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\") " pod="kube-system/coredns-668d6bf9bc-lkr85" Feb 13 19:50:46.634698 kubelet[2512]: I0213 19:50:46.634664 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrn6x\" (UniqueName: \"kubernetes.io/projected/7f04c116-34a0-411d-a3e2-e79b0fc5cc48-kube-api-access-zrn6x\") pod \"calico-kube-controllers-59d4c45f-7bhrp\" (UID: \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\") " pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" Feb 13 19:50:46.693515 sshd[3313]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:46.698006 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:48964.service: Deactivated successfully. Feb 13 19:50:46.700205 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:50:46.702187 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:50:46.703304 systemd-logind[1442]: Removed session 10. 
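Every sandbox operation that fails from 19:50:46.9 onward below fails for the same reason: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and the file is absent because the ghcr.io/flatcar/calico/node:v3.29.1 image is still being pulled (the pull only completes at 19:50:54). A hedged sketch of that gate, not the actual plugin code:

    package main

    import (
        "fmt"
        "os"
    )

    // calicoNodeReady mimics the check behind the repeated "stat /var/lib/calico/nodename"
    // errors: until calico/node has written the file, both CNI ADD and CNI DEL fail,
    // so no pod sandbox can be set up or torn down cleanly.
    func calicoNodeReady() error {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            return fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
        }
        return nil
    }

    func main() {
        if err := calicoNodeReady(); err != nil {
            fmt.Println("sandbox setup would fail:", err)
        }
    }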
Feb 13 19:50:46.900154 kubelet[2512]: E0213 19:50:46.900115 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:46.901070 containerd[1460]: time="2025-02-13T19:50:46.901020785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ht7r8,Uid:737f6164-6397-4855-90ba-00598f17612b,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:46.908640 containerd[1460]: time="2025-02-13T19:50:46.908597659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-rph6v,Uid:91218b03-c787-48a1-bed0-596bf149fa36,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:50:46.927221 kubelet[2512]: E0213 19:50:46.927051 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:46.927827 containerd[1460]: time="2025-02-13T19:50:46.927775552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lkr85,Uid:d19124ca-1721-4ff8-b4f5-05a576fbbc55,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:46.931903 containerd[1460]: time="2025-02-13T19:50:46.931860821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d4c45f-7bhrp,Uid:7f04c116-34a0-411d-a3e2-e79b0fc5cc48,Namespace:calico-system,Attempt:0,}" Feb 13 19:50:46.939251 containerd[1460]: time="2025-02-13T19:50:46.939195390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-h7vkc,Uid:9521aaa2-a8f7-4df8-a700-3d246b89217a,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:50:47.027481 containerd[1460]: time="2025-02-13T19:50:47.026725229Z" level=error msg="Failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.027481 containerd[1460]: time="2025-02-13T19:50:47.027149206Z" level=error msg="encountered an error cleaning up failed sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.027481 containerd[1460]: time="2025-02-13T19:50:47.027193288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-rph6v,Uid:91218b03-c787-48a1-bed0-596bf149fa36,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.027854 kubelet[2512]: E0213 19:50:47.027478 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:50:47.028806 kubelet[2512]: E0213 19:50:47.028768 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" Feb 13 19:50:47.028869 kubelet[2512]: E0213 19:50:47.028818 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" Feb 13 19:50:47.028898 kubelet[2512]: E0213 19:50:47.028876 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589bd969f9-rph6v_calico-apiserver(91218b03-c787-48a1-bed0-596bf149fa36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589bd969f9-rph6v_calico-apiserver(91218b03-c787-48a1-bed0-596bf149fa36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" podUID="91218b03-c787-48a1-bed0-596bf149fa36" Feb 13 19:50:47.034321 containerd[1460]: time="2025-02-13T19:50:47.034261328Z" level=error msg="Failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.034900 containerd[1460]: time="2025-02-13T19:50:47.034874162Z" level=error msg="encountered an error cleaning up failed sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.035004 containerd[1460]: time="2025-02-13T19:50:47.034983572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ht7r8,Uid:737f6164-6397-4855-90ba-00598f17612b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.035334 kubelet[2512]: E0213 19:50:47.035291 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.035391 kubelet[2512]: E0213 19:50:47.035353 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ht7r8" Feb 13 19:50:47.035391 kubelet[2512]: E0213 19:50:47.035374 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ht7r8" Feb 13 19:50:47.035681 kubelet[2512]: E0213 19:50:47.035434 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ht7r8_kube-system(737f6164-6397-4855-90ba-00598f17612b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ht7r8_kube-system(737f6164-6397-4855-90ba-00598f17612b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ht7r8" podUID="737f6164-6397-4855-90ba-00598f17612b" Feb 13 19:50:47.044563 containerd[1460]: time="2025-02-13T19:50:47.044487098Z" level=error msg="Failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.045009 containerd[1460]: time="2025-02-13T19:50:47.044979151Z" level=error msg="encountered an error cleaning up failed sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.045066 containerd[1460]: time="2025-02-13T19:50:47.045038630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lkr85,Uid:d19124ca-1721-4ff8-b4f5-05a576fbbc55,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.045340 kubelet[2512]: E0213 19:50:47.045274 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.045340 kubelet[2512]: E0213 19:50:47.045341 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lkr85" Feb 13 19:50:47.045616 kubelet[2512]: E0213 19:50:47.045407 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lkr85" Feb 13 19:50:47.045616 kubelet[2512]: E0213 19:50:47.045454 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lkr85_kube-system(d19124ca-1721-4ff8-b4f5-05a576fbbc55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lkr85_kube-system(d19124ca-1721-4ff8-b4f5-05a576fbbc55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lkr85" podUID="d19124ca-1721-4ff8-b4f5-05a576fbbc55" Feb 13 19:50:47.065960 containerd[1460]: time="2025-02-13T19:50:47.065895244Z" level=error msg="Failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.066479 containerd[1460]: time="2025-02-13T19:50:47.066440525Z" level=error msg="encountered an error cleaning up failed sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.067396 containerd[1460]: time="2025-02-13T19:50:47.066501686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d4c45f-7bhrp,Uid:7f04c116-34a0-411d-a3e2-e79b0fc5cc48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.067743 kubelet[2512]: E0213 19:50:47.067692 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.067859 kubelet[2512]: E0213 19:50:47.067760 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" Feb 13 19:50:47.067904 kubelet[2512]: E0213 19:50:47.067867 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" Feb 13 19:50:47.067963 kubelet[2512]: E0213 19:50:47.067919 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59d4c45f-7bhrp_calico-system(7f04c116-34a0-411d-a3e2-e79b0fc5cc48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59d4c45f-7bhrp_calico-system(7f04c116-34a0-411d-a3e2-e79b0fc5cc48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" podUID="7f04c116-34a0-411d-a3e2-e79b0fc5cc48" Feb 13 19:50:47.074200 containerd[1460]: time="2025-02-13T19:50:47.074142527Z" level=error msg="Failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.074687 containerd[1460]: time="2025-02-13T19:50:47.074649547Z" level=error msg="encountered an error cleaning up failed sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.074742 containerd[1460]: time="2025-02-13T19:50:47.074712372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-h7vkc,Uid:9521aaa2-a8f7-4df8-a700-3d246b89217a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.074969 kubelet[2512]: E0213 
19:50:47.074928 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.075023 kubelet[2512]: E0213 19:50:47.074986 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" Feb 13 19:50:47.075023 kubelet[2512]: E0213 19:50:47.075008 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" Feb 13 19:50:47.075072 kubelet[2512]: E0213 19:50:47.075047 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589bd969f9-h7vkc_calico-apiserver(9521aaa2-a8f7-4df8-a700-3d246b89217a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589bd969f9-h7vkc_calico-apiserver(9521aaa2-a8f7-4df8-a700-3d246b89217a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" podUID="9521aaa2-a8f7-4df8-a700-3d246b89217a" Feb 13 19:50:47.299853 kubelet[2512]: E0213 19:50:47.299793 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:47.300871 containerd[1460]: time="2025-02-13T19:50:47.300504491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:50:47.301291 kubelet[2512]: I0213 19:50:47.301117 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:50:47.305567 containerd[1460]: time="2025-02-13T19:50:47.303617920Z" level=info msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\"" Feb 13 19:50:47.305567 containerd[1460]: time="2025-02-13T19:50:47.303844355Z" level=info msg="Ensure that sandbox aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6 in task-service has been cleanup successfully" Feb 13 19:50:47.306022 kubelet[2512]: I0213 19:50:47.305993 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:50:47.306915 containerd[1460]: time="2025-02-13T19:50:47.306874601Z" 
level=info msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\"" Feb 13 19:50:47.307956 containerd[1460]: time="2025-02-13T19:50:47.307925659Z" level=info msg="Ensure that sandbox f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434 in task-service has been cleanup successfully" Feb 13 19:50:47.308021 kubelet[2512]: I0213 19:50:47.308000 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:50:47.308853 containerd[1460]: time="2025-02-13T19:50:47.308828955Z" level=info msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\"" Feb 13 19:50:47.308978 containerd[1460]: time="2025-02-13T19:50:47.308956980Z" level=info msg="Ensure that sandbox 39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b in task-service has been cleanup successfully" Feb 13 19:50:47.316240 kubelet[2512]: I0213 19:50:47.316198 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:50:47.316908 containerd[1460]: time="2025-02-13T19:50:47.316873025Z" level=info msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\"" Feb 13 19:50:47.317220 containerd[1460]: time="2025-02-13T19:50:47.317167695Z" level=info msg="Ensure that sandbox 43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135 in task-service has been cleanup successfully" Feb 13 19:50:47.320569 kubelet[2512]: I0213 19:50:47.320231 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:50:47.321267 containerd[1460]: time="2025-02-13T19:50:47.321228722Z" level=info msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\"" Feb 13 19:50:47.321455 containerd[1460]: time="2025-02-13T19:50:47.321431924Z" level=info msg="Ensure that sandbox c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131 in task-service has been cleanup successfully" Feb 13 19:50:47.361811 containerd[1460]: time="2025-02-13T19:50:47.361740677Z" level=error msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\" failed" error="failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.362099 kubelet[2512]: E0213 19:50:47.362046 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:50:47.362207 kubelet[2512]: E0213 19:50:47.362140 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6"} Feb 13 19:50:47.362257 kubelet[2512]: E0213 19:50:47.362233 2512 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:47.362336 kubelet[2512]: E0213 19:50:47.362264 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" podUID="9521aaa2-a8f7-4df8-a700-3d246b89217a" Feb 13 19:50:47.368163 containerd[1460]: time="2025-02-13T19:50:47.368083078Z" level=error msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\" failed" error="failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.368500 kubelet[2512]: E0213 19:50:47.368434 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:50:47.368594 kubelet[2512]: E0213 19:50:47.368511 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434"} Feb 13 19:50:47.368641 kubelet[2512]: E0213 19:50:47.368607 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:47.368708 kubelet[2512]: E0213 19:50:47.368647 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ht7r8" 
podUID="737f6164-6397-4855-90ba-00598f17612b" Feb 13 19:50:47.369577 containerd[1460]: time="2025-02-13T19:50:47.369536784Z" level=error msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\" failed" error="failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.369723 kubelet[2512]: E0213 19:50:47.369676 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:50:47.369723 kubelet[2512]: E0213 19:50:47.369711 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135"} Feb 13 19:50:47.369801 kubelet[2512]: E0213 19:50:47.369739 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:47.369801 kubelet[2512]: E0213 19:50:47.369763 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lkr85" podUID="d19124ca-1721-4ff8-b4f5-05a576fbbc55" Feb 13 19:50:47.376436 containerd[1460]: time="2025-02-13T19:50:47.376364132Z" level=error msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\" failed" error="failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.376757 kubelet[2512]: E0213 19:50:47.376699 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:50:47.376829 kubelet[2512]: E0213 
19:50:47.376776 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b"} Feb 13 19:50:47.376857 kubelet[2512]: E0213 19:50:47.376824 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:47.376929 kubelet[2512]: E0213 19:50:47.376858 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" podUID="7f04c116-34a0-411d-a3e2-e79b0fc5cc48" Feb 13 19:50:47.384717 containerd[1460]: time="2025-02-13T19:50:47.384661607Z" level=error msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\" failed" error="failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.385032 kubelet[2512]: E0213 19:50:47.384968 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:50:47.385117 kubelet[2512]: E0213 19:50:47.385054 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131"} Feb 13 19:50:47.385117 kubelet[2512]: E0213 19:50:47.385101 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:47.385204 kubelet[2512]: E0213 19:50:47.385135 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" podUID="91218b03-c787-48a1-bed0-596bf149fa36" Feb 13 19:50:47.623866 systemd[1]: Created slice kubepods-besteffort-pod171a5a7b_3719_455d_8f5a_9cdf2ea5e0bd.slice - libcontainer container kubepods-besteffort-pod171a5a7b_3719_455d_8f5a_9cdf2ea5e0bd.slice. Feb 13 19:50:47.627196 containerd[1460]: time="2025-02-13T19:50:47.627149210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7d74,Uid:171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd,Namespace:calico-system,Attempt:0,}" Feb 13 19:50:47.702383 containerd[1460]: time="2025-02-13T19:50:47.702271389Z" level=error msg="Failed to destroy network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.703001 containerd[1460]: time="2025-02-13T19:50:47.702947488Z" level=error msg="encountered an error cleaning up failed sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.703075 containerd[1460]: time="2025-02-13T19:50:47.703025241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7d74,Uid:171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.703387 kubelet[2512]: E0213 19:50:47.703304 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:47.703917 kubelet[2512]: E0213 19:50:47.703390 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g7d74" Feb 13 19:50:47.703917 kubelet[2512]: E0213 19:50:47.703428 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-g7d74" Feb 13 19:50:47.703917 kubelet[2512]: E0213 19:50:47.703499 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g7d74_calico-system(171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g7d74_calico-system(171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:47.705630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3-shm.mount: Deactivated successfully. Feb 13 19:50:48.323465 kubelet[2512]: I0213 19:50:48.323417 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:50:48.324223 containerd[1460]: time="2025-02-13T19:50:48.324170910Z" level=info msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\"" Feb 13 19:50:48.324411 containerd[1460]: time="2025-02-13T19:50:48.324388789Z" level=info msg="Ensure that sandbox cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3 in task-service has been cleanup successfully" Feb 13 19:50:48.357944 containerd[1460]: time="2025-02-13T19:50:48.357883484Z" level=error msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\" failed" error="failed to destroy network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:48.358231 kubelet[2512]: E0213 19:50:48.358159 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:50:48.358231 kubelet[2512]: E0213 19:50:48.358224 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3"} Feb 13 19:50:48.358512 kubelet[2512]: E0213 19:50:48.358260 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:48.358512 kubelet[2512]: E0213 19:50:48.358287 2512 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:50:51.709620 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:44980.service - OpenSSH per-connection server daemon (10.0.0.1:44980). Feb 13 19:50:51.798879 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 44980 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:50:51.801212 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:51.806269 systemd-logind[1442]: New session 11 of user core. Feb 13 19:50:51.810960 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:50:51.971061 sshd[3709]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:51.976724 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:44980.service: Deactivated successfully. Feb 13 19:50:51.979142 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:50:51.980963 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:50:51.982112 systemd-logind[1442]: Removed session 11. Feb 13 19:50:52.728201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2465869070.mount: Deactivated successfully. Feb 13 19:50:53.996000 containerd[1460]: time="2025-02-13T19:50:53.995923408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:54.046638 containerd[1460]: time="2025-02-13T19:50:54.046576332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:50:54.091771 containerd[1460]: time="2025-02-13T19:50:54.091684443Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:54.096182 containerd[1460]: time="2025-02-13T19:50:54.096108913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:54.096626 containerd[1460]: time="2025-02-13T19:50:54.096585714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.796021103s" Feb 13 19:50:54.096680 containerd[1460]: time="2025-02-13T19:50:54.096628433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:50:54.106577 containerd[1460]: time="2025-02-13T19:50:54.106362809Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:50:54.176834 containerd[1460]: time="2025-02-13T19:50:54.176741158Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51\"" Feb 13 19:50:54.177469 containerd[1460]: time="2025-02-13T19:50:54.177434809Z" level=info msg="StartContainer for \"574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51\"" Feb 13 19:50:54.260812 systemd[1]: Started cri-containerd-574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51.scope - libcontainer container 574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51. Feb 13 19:50:54.622267 containerd[1460]: time="2025-02-13T19:50:54.622203101Z" level=info msg="StartContainer for \"574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51\" returns successfully" Feb 13 19:50:54.645739 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:50:54.645907 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:50:54.672544 systemd[1]: cri-containerd-574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51.scope: Deactivated successfully. Feb 13 19:50:54.911593 containerd[1460]: time="2025-02-13T19:50:54.911409996Z" level=info msg="shim disconnected" id=574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51 namespace=k8s.io Feb 13 19:50:54.911593 containerd[1460]: time="2025-02-13T19:50:54.911483001Z" level=warning msg="cleaning up after shim disconnected" id=574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51 namespace=k8s.io Feb 13 19:50:54.911593 containerd[1460]: time="2025-02-13T19:50:54.911495634Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:55.103407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51-rootfs.mount: Deactivated successfully. Feb 13 19:50:55.631861 kubelet[2512]: I0213 19:50:55.631825 2512 scope.go:117] "RemoveContainer" containerID="574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51" Feb 13 19:50:55.632485 kubelet[2512]: E0213 19:50:55.631915 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:55.633896 containerd[1460]: time="2025-02-13T19:50:55.633850998Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Feb 13 19:50:55.658870 containerd[1460]: time="2025-02-13T19:50:55.658797578Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3\"" Feb 13 19:50:55.659444 containerd[1460]: time="2025-02-13T19:50:55.659396687Z" level=info msg="StartContainer for \"f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3\"" Feb 13 19:50:55.698798 systemd[1]: Started cri-containerd-f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3.scope - libcontainer container f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3. 
Feb 13 19:50:55.733831 containerd[1460]: time="2025-02-13T19:50:55.733771855Z" level=info msg="StartContainer for \"f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3\" returns successfully" Feb 13 19:50:55.795071 systemd[1]: cri-containerd-f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3.scope: Deactivated successfully. Feb 13 19:50:55.828041 containerd[1460]: time="2025-02-13T19:50:55.827949586Z" level=info msg="shim disconnected" id=f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3 namespace=k8s.io Feb 13 19:50:55.828041 containerd[1460]: time="2025-02-13T19:50:55.828034944Z" level=warning msg="cleaning up after shim disconnected" id=f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3 namespace=k8s.io Feb 13 19:50:55.828041 containerd[1460]: time="2025-02-13T19:50:55.828050343Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:56.103992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3-rootfs.mount: Deactivated successfully. Feb 13 19:50:56.636616 kubelet[2512]: I0213 19:50:56.636574 2512 scope.go:117] "RemoveContainer" containerID="574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51" Feb 13 19:50:56.637188 kubelet[2512]: I0213 19:50:56.636967 2512 scope.go:117] "RemoveContainer" containerID="f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3" Feb 13 19:50:56.637188 kubelet[2512]: E0213 19:50:56.637097 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:56.637273 kubelet[2512]: E0213 19:50:56.637238 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-rpv6b_calico-system(b21d5cce-88cd-478b-b123-4ff08ac7f29e)\"" pod="calico-system/calico-node-rpv6b" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" Feb 13 19:50:56.639078 containerd[1460]: time="2025-02-13T19:50:56.639036899Z" level=info msg="RemoveContainer for \"574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51\"" Feb 13 19:50:56.651901 containerd[1460]: time="2025-02-13T19:50:56.651846621Z" level=info msg="RemoveContainer for \"574c4ca2ae6964774d09fb08d02688ca31caad207d7c7ae000a652f32a023a51\" returns successfully" Feb 13 19:50:56.983902 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:44992.service - OpenSSH per-connection server daemon (10.0.0.1:44992). Feb 13 19:50:57.027274 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 44992 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:50:57.029259 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:57.033947 systemd-logind[1442]: New session 12 of user core. Feb 13 19:50:57.049813 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:50:57.213676 sshd[3862]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:57.217339 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:44992.service: Deactivated successfully. Feb 13 19:50:57.219357 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:50:57.220119 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:50:57.221134 systemd-logind[1442]: Removed session 12. 
Feb 13 19:50:57.619247 containerd[1460]: time="2025-02-13T19:50:57.619194077Z" level=info msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\"" Feb 13 19:50:57.641633 kubelet[2512]: I0213 19:50:57.641581 2512 scope.go:117] "RemoveContainer" containerID="f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3" Feb 13 19:50:57.642138 kubelet[2512]: E0213 19:50:57.641667 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:57.642138 kubelet[2512]: E0213 19:50:57.641783 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-rpv6b_calico-system(b21d5cce-88cd-478b-b123-4ff08ac7f29e)\"" pod="calico-system/calico-node-rpv6b" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" Feb 13 19:50:57.655697 containerd[1460]: time="2025-02-13T19:50:57.655641956Z" level=error msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\" failed" error="failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:57.656165 kubelet[2512]: E0213 19:50:57.655851 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:50:57.656165 kubelet[2512]: E0213 19:50:57.655933 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434"} Feb 13 19:50:57.656165 kubelet[2512]: E0213 19:50:57.655972 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:57.656165 kubelet[2512]: E0213 19:50:57.656000 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ht7r8" podUID="737f6164-6397-4855-90ba-00598f17612b" Feb 13 19:50:58.620045 containerd[1460]: time="2025-02-13T19:50:58.619989073Z" level=info 
msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\"" Feb 13 19:50:58.648008 containerd[1460]: time="2025-02-13T19:50:58.647933902Z" level=error msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\" failed" error="failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:58.648159 kubelet[2512]: E0213 19:50:58.648124 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:50:58.648542 kubelet[2512]: E0213 19:50:58.648169 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6"} Feb 13 19:50:58.648542 kubelet[2512]: E0213 19:50:58.648207 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:58.648542 kubelet[2512]: E0213 19:50:58.648235 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" podUID="9521aaa2-a8f7-4df8-a700-3d246b89217a" Feb 13 19:51:00.619736 containerd[1460]: time="2025-02-13T19:51:00.619626517Z" level=info msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\"" Feb 13 19:51:00.620203 containerd[1460]: time="2025-02-13T19:51:00.619626537Z" level=info msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\"" Feb 13 19:51:00.645952 containerd[1460]: time="2025-02-13T19:51:00.645885994Z" level=error msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\" failed" error="failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:00.646145 kubelet[2512]: E0213 19:51:00.646093 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:51:00.646512 kubelet[2512]: E0213 19:51:00.646163 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b"} Feb 13 19:51:00.646512 kubelet[2512]: E0213 19:51:00.646205 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:00.646512 kubelet[2512]: E0213 19:51:00.646238 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" podUID="7f04c116-34a0-411d-a3e2-e79b0fc5cc48" Feb 13 19:51:00.646721 containerd[1460]: time="2025-02-13T19:51:00.646689547Z" level=error msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\" failed" error="failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:00.646847 kubelet[2512]: E0213 19:51:00.646823 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:51:00.646894 kubelet[2512]: E0213 19:51:00.646850 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131"} Feb 13 19:51:00.646894 kubelet[2512]: E0213 19:51:00.646876 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:00.646993 kubelet[2512]: E0213 19:51:00.646898 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" podUID="91218b03-c787-48a1-bed0-596bf149fa36" Feb 13 19:51:01.529328 kubelet[2512]: I0213 19:51:01.529287 2512 scope.go:117] "RemoveContainer" containerID="f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3" Feb 13 19:51:01.529475 kubelet[2512]: E0213 19:51:01.529366 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:01.529475 kubelet[2512]: E0213 19:51:01.529465 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-rpv6b_calico-system(b21d5cce-88cd-478b-b123-4ff08ac7f29e)\"" pod="calico-system/calico-node-rpv6b" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" Feb 13 19:51:01.619403 containerd[1460]: time="2025-02-13T19:51:01.619356846Z" level=info msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\"" Feb 13 19:51:01.648104 containerd[1460]: time="2025-02-13T19:51:01.648040879Z" level=error msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\" failed" error="failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:01.648612 kubelet[2512]: E0213 19:51:01.648282 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:51:01.648612 kubelet[2512]: E0213 19:51:01.648340 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135"} Feb 13 19:51:01.648612 kubelet[2512]: E0213 19:51:01.648375 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 
13 19:51:01.648612 kubelet[2512]: E0213 19:51:01.648402 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lkr85" podUID="d19124ca-1721-4ff8-b4f5-05a576fbbc55" Feb 13 19:51:02.229545 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:50332.service - OpenSSH per-connection server daemon (10.0.0.1:50332). Feb 13 19:51:02.269787 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 50332 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:02.271476 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:02.275664 systemd-logind[1442]: New session 13 of user core. Feb 13 19:51:02.285679 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:51:02.396277 sshd[3993]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:02.409865 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:50332.service: Deactivated successfully. Feb 13 19:51:02.412009 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:51:02.414167 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:51:02.424819 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:50346.service - OpenSSH per-connection server daemon (10.0.0.1:50346). Feb 13 19:51:02.425925 systemd-logind[1442]: Removed session 13. Feb 13 19:51:02.459912 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 50346 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:02.461682 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:02.466225 systemd-logind[1442]: New session 14 of user core. Feb 13 19:51:02.475715 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:51:02.627381 sshd[4009]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:02.637313 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:50346.service: Deactivated successfully. Feb 13 19:51:02.641955 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:51:02.647931 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:51:02.654031 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:50356.service - OpenSSH per-connection server daemon (10.0.0.1:50356). Feb 13 19:51:02.655079 systemd-logind[1442]: Removed session 14. Feb 13 19:51:02.689736 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 50356 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:02.691670 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:02.697010 systemd-logind[1442]: New session 15 of user core. Feb 13 19:51:02.708794 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:51:02.825212 sshd[4021]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:02.829592 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:50356.service: Deactivated successfully. Feb 13 19:51:02.832360 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:51:02.834440 systemd-logind[1442]: Session 15 logged out. 
Waiting for processes to exit. Feb 13 19:51:02.835756 systemd-logind[1442]: Removed session 15. Feb 13 19:51:03.619549 containerd[1460]: time="2025-02-13T19:51:03.619340723Z" level=info msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\"" Feb 13 19:51:03.645485 containerd[1460]: time="2025-02-13T19:51:03.645424016Z" level=error msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\" failed" error="failed to destroy network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:03.645771 kubelet[2512]: E0213 19:51:03.645711 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:51:03.646102 kubelet[2512]: E0213 19:51:03.645785 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3"} Feb 13 19:51:03.646102 kubelet[2512]: E0213 19:51:03.645844 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:03.646102 kubelet[2512]: E0213 19:51:03.645870 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:51:07.841040 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:50358.service - OpenSSH per-connection server daemon (10.0.0.1:50358). Feb 13 19:51:07.879739 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 50358 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:07.881574 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:07.886744 systemd-logind[1442]: New session 16 of user core. Feb 13 19:51:07.893651 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:51:08.005615 sshd[4060]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:08.010118 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:50358.service: Deactivated successfully. 
Feb 13 19:51:08.012254 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:51:08.013293 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:51:08.014455 systemd-logind[1442]: Removed session 16. Feb 13 19:51:10.619470 containerd[1460]: time="2025-02-13T19:51:10.619089944Z" level=info msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\"" Feb 13 19:51:10.646986 containerd[1460]: time="2025-02-13T19:51:10.646928260Z" level=error msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\" failed" error="failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:10.647236 kubelet[2512]: E0213 19:51:10.647179 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:51:10.647645 kubelet[2512]: E0213 19:51:10.647260 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434"} Feb 13 19:51:10.647645 kubelet[2512]: E0213 19:51:10.647300 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:10.647645 kubelet[2512]: E0213 19:51:10.647335 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ht7r8" podUID="737f6164-6397-4855-90ba-00598f17612b" Feb 13 19:51:11.619604 containerd[1460]: time="2025-02-13T19:51:11.619492117Z" level=info msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\"" Feb 13 19:51:11.650863 containerd[1460]: time="2025-02-13T19:51:11.650795225Z" level=error msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\" failed" error="failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:51:11.651114 kubelet[2512]: E0213 19:51:11.651044 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:51:11.651461 kubelet[2512]: E0213 19:51:11.651112 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6"} Feb 13 19:51:11.651461 kubelet[2512]: E0213 19:51:11.651148 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:11.651461 kubelet[2512]: E0213 19:51:11.651171 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" podUID="9521aaa2-a8f7-4df8-a700-3d246b89217a" Feb 13 19:51:12.621697 containerd[1460]: time="2025-02-13T19:51:12.621656165Z" level=info msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\"" Feb 13 19:51:12.651743 containerd[1460]: time="2025-02-13T19:51:12.651684882Z" level=error msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\" failed" error="failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:12.651997 kubelet[2512]: E0213 19:51:12.651935 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:51:12.652422 kubelet[2512]: E0213 19:51:12.652009 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131"} Feb 13 19:51:12.652422 kubelet[2512]: E0213 19:51:12.652045 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:12.652422 kubelet[2512]: E0213 19:51:12.652070 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" podUID="91218b03-c787-48a1-bed0-596bf149fa36" Feb 13 19:51:13.017758 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:44364.service - OpenSSH per-connection server daemon (10.0.0.1:44364). Feb 13 19:51:13.053541 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 44364 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:13.055110 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:13.059223 systemd-logind[1442]: New session 17 of user core. Feb 13 19:51:13.068666 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:51:13.191278 sshd[4144]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:13.195606 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:44364.service: Deactivated successfully. Feb 13 19:51:13.197629 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:51:13.198358 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:51:13.199196 systemd-logind[1442]: Removed session 17. 
Feb 13 19:51:13.619121 containerd[1460]: time="2025-02-13T19:51:13.619057889Z" level=info msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\"" Feb 13 19:51:13.650845 containerd[1460]: time="2025-02-13T19:51:13.650776718Z" level=error msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\" failed" error="failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:13.651303 kubelet[2512]: E0213 19:51:13.651043 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:51:13.651303 kubelet[2512]: E0213 19:51:13.651102 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135"} Feb 13 19:51:13.651303 kubelet[2512]: E0213 19:51:13.651137 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:13.651303 kubelet[2512]: E0213 19:51:13.651160 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lkr85" podUID="d19124ca-1721-4ff8-b4f5-05a576fbbc55" Feb 13 19:51:14.619345 containerd[1460]: time="2025-02-13T19:51:14.619216925Z" level=info msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\"" Feb 13 19:51:14.650342 containerd[1460]: time="2025-02-13T19:51:14.650268592Z" level=error msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\" failed" error="failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:14.650592 kubelet[2512]: E0213 19:51:14.650536 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:51:14.651078 kubelet[2512]: E0213 19:51:14.650597 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b"} Feb 13 19:51:14.651078 kubelet[2512]: E0213 19:51:14.650633 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:14.651078 kubelet[2512]: E0213 19:51:14.650659 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" podUID="7f04c116-34a0-411d-a3e2-e79b0fc5cc48" Feb 13 19:51:15.507660 kubelet[2512]: I0213 19:51:15.507604 2512 scope.go:117] "RemoveContainer" containerID="f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3" Feb 13 19:51:15.507840 kubelet[2512]: E0213 19:51:15.507695 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:15.511089 containerd[1460]: time="2025-02-13T19:51:15.511034680Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Feb 13 19:51:15.526782 containerd[1460]: time="2025-02-13T19:51:15.526715278Z" level=info msg="CreateContainer within sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\"" Feb 13 19:51:15.527411 containerd[1460]: time="2025-02-13T19:51:15.527346679Z" level=info msg="StartContainer for \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\"" Feb 13 19:51:15.571712 systemd[1]: Started cri-containerd-74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23.scope - libcontainer container 74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23. Feb 13 19:51:15.610639 containerd[1460]: time="2025-02-13T19:51:15.610588300Z" level=info msg="StartContainer for \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\" returns successfully" Feb 13 19:51:15.667409 systemd[1]: cri-containerd-74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23.scope: Deactivated successfully. 
Feb 13 19:51:15.690127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23-rootfs.mount: Deactivated successfully. Feb 13 19:51:15.692567 kubelet[2512]: E0213 19:51:15.692010 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:15.694479 containerd[1460]: time="2025-02-13T19:51:15.694291615Z" level=info msg="shim disconnected" id=74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23 namespace=k8s.io Feb 13 19:51:15.694479 containerd[1460]: time="2025-02-13T19:51:15.694335099Z" level=warning msg="cleaning up after shim disconnected" id=74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23 namespace=k8s.io Feb 13 19:51:15.694479 containerd[1460]: time="2025-02-13T19:51:15.694343325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:15.707056 containerd[1460]: time="2025-02-13T19:51:15.706103097Z" level=error msg="ExecSync for \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\" failed" error="failed to exec in container: failed to create exec \"ccf9ff840fcd0dcaeac518db3f77608e0895bc7119bdf0df3e990a64dd85033e\": ttrpc: closed: unknown" Feb 13 19:51:15.708107 kubelet[2512]: E0213 19:51:15.707895 2512 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"ccf9ff840fcd0dcaeac518db3f77608e0895bc7119bdf0df3e990a64dd85033e\": ttrpc: closed: unknown" containerID="74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 19:51:15.709005 containerd[1460]: time="2025-02-13T19:51:15.708959253Z" level=error msg="ExecSync for \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23 not found: not found" Feb 13 19:51:15.709113 kubelet[2512]: E0213 19:51:15.709080 2512 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23 not found: not found" containerID="74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 19:51:15.709670 kubelet[2512]: I0213 19:51:15.709612 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rpv6b" podStartSLOduration=22.085759558 podStartE2EDuration="49.708626149s" podCreationTimestamp="2025-02-13 19:50:26 +0000 UTC" firstStartedPulling="2025-02-13 19:50:26.474704445 +0000 UTC m=+11.956308917" lastFinishedPulling="2025-02-13 19:50:54.097571046 +0000 UTC m=+39.579175508" observedRunningTime="2025-02-13 19:51:15.708082697 +0000 UTC m=+61.189687149" watchObservedRunningTime="2025-02-13 19:51:15.708626149 +0000 UTC m=+61.190230611" Feb 13 19:51:15.709855 containerd[1460]: time="2025-02-13T19:51:15.709826531Z" level=error msg="ExecSync for \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 
74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23 not found: not found" Feb 13 19:51:15.709947 kubelet[2512]: E0213 19:51:15.709920 2512 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23 not found: not found" containerID="74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 19:51:16.696351 kubelet[2512]: I0213 19:51:16.696314 2512 scope.go:117] "RemoveContainer" containerID="f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3" Feb 13 19:51:16.696961 kubelet[2512]: I0213 19:51:16.696647 2512 scope.go:117] "RemoveContainer" containerID="74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23" Feb 13 19:51:16.696961 kubelet[2512]: E0213 19:51:16.696725 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:16.696961 kubelet[2512]: E0213 19:51:16.696814 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-rpv6b_calico-system(b21d5cce-88cd-478b-b123-4ff08ac7f29e)\"" pod="calico-system/calico-node-rpv6b" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" Feb 13 19:51:16.698012 containerd[1460]: time="2025-02-13T19:51:16.697956713Z" level=info msg="RemoveContainer for \"f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3\"" Feb 13 19:51:16.779755 containerd[1460]: time="2025-02-13T19:51:16.779684891Z" level=info msg="RemoveContainer for \"f0158259b1bccd117391a6e760531904ddf47cf637b0b0de2608e8bed140d6e3\" returns successfully" Feb 13 19:51:17.701598 kubelet[2512]: I0213 19:51:17.701567 2512 scope.go:117] "RemoveContainer" containerID="74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23" Feb 13 19:51:17.702096 kubelet[2512]: E0213 19:51:17.701640 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:17.702096 kubelet[2512]: E0213 19:51:17.701740 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-rpv6b_calico-system(b21d5cce-88cd-478b-b123-4ff08ac7f29e)\"" pod="calico-system/calico-node-rpv6b" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" Feb 13 19:51:18.208234 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:44368.service - OpenSSH per-connection server daemon (10.0.0.1:44368). Feb 13 19:51:18.248547 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 44368 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:18.250570 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:18.254875 systemd-logind[1442]: New session 18 of user core. Feb 13 19:51:18.262672 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:51:18.437238 sshd[4271]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:18.442651 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:44368.service: Deactivated successfully. 
Feb 13 19:51:18.445598 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:51:18.446367 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:51:18.447477 systemd-logind[1442]: Removed session 18. Feb 13 19:51:18.620671 containerd[1460]: time="2025-02-13T19:51:18.620257959Z" level=info msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\"" Feb 13 19:51:18.654680 containerd[1460]: time="2025-02-13T19:51:18.654606468Z" level=error msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\" failed" error="failed to destroy network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:18.654947 kubelet[2512]: E0213 19:51:18.654872 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:51:18.655027 kubelet[2512]: E0213 19:51:18.654943 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3"} Feb 13 19:51:18.655027 kubelet[2512]: E0213 19:51:18.654980 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:18.655027 kubelet[2512]: E0213 19:51:18.655005 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7d74" podUID="171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd" Feb 13 19:51:22.620103 containerd[1460]: time="2025-02-13T19:51:22.619210781Z" level=info msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\"" Feb 13 19:51:22.620103 containerd[1460]: time="2025-02-13T19:51:22.619220961Z" level=info msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\"" Feb 13 19:51:22.652687 containerd[1460]: time="2025-02-13T19:51:22.652629724Z" level=error msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\" failed" error="failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:22.652915 kubelet[2512]: E0213 19:51:22.652859 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:51:22.653342 kubelet[2512]: E0213 19:51:22.652927 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434"} Feb 13 19:51:22.653342 kubelet[2512]: E0213 19:51:22.652964 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:22.653342 kubelet[2512]: E0213 19:51:22.652991 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"737f6164-6397-4855-90ba-00598f17612b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ht7r8" podUID="737f6164-6397-4855-90ba-00598f17612b" Feb 13 19:51:22.657572 containerd[1460]: time="2025-02-13T19:51:22.657493062Z" level=error msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\" failed" error="failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:22.657805 kubelet[2512]: E0213 19:51:22.657756 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:51:22.657877 kubelet[2512]: E0213 19:51:22.657812 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6"} Feb 13 19:51:22.657877 kubelet[2512]: E0213 19:51:22.657854 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:22.657982 kubelet[2512]: E0213 19:51:22.657882 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9521aaa2-a8f7-4df8-a700-3d246b89217a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" podUID="9521aaa2-a8f7-4df8-a700-3d246b89217a" Feb 13 19:51:23.455080 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:49588.service - OpenSSH per-connection server daemon (10.0.0.1:49588). Feb 13 19:51:23.506038 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 49588 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:23.507986 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:23.513321 systemd-logind[1442]: New session 19 of user core. Feb 13 19:51:23.521680 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:51:23.653732 sshd[4359]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:23.657643 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:49588.service: Deactivated successfully. Feb 13 19:51:23.659370 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:51:23.660095 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:51:23.661278 systemd-logind[1442]: Removed session 19. 
Feb 13 19:51:24.619423 containerd[1460]: time="2025-02-13T19:51:24.619326968Z" level=info msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\"" Feb 13 19:51:24.647540 containerd[1460]: time="2025-02-13T19:51:24.647451130Z" level=error msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\" failed" error="failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:24.647793 kubelet[2512]: E0213 19:51:24.647734 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:51:24.648188 kubelet[2512]: E0213 19:51:24.647800 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131"} Feb 13 19:51:24.648188 kubelet[2512]: E0213 19:51:24.647836 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:24.648188 kubelet[2512]: E0213 19:51:24.647861 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91218b03-c787-48a1-bed0-596bf149fa36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" podUID="91218b03-c787-48a1-bed0-596bf149fa36" Feb 13 19:51:26.619924 containerd[1460]: time="2025-02-13T19:51:26.619775452Z" level=info msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\"" Feb 13 19:51:26.648955 containerd[1460]: time="2025-02-13T19:51:26.648869106Z" level=error msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\" failed" error="failed to destroy network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:26.649179 kubelet[2512]: E0213 19:51:26.649121 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:51:26.649589 kubelet[2512]: E0213 19:51:26.649199 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135"} Feb 13 19:51:26.649589 kubelet[2512]: E0213 19:51:26.649246 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:26.649589 kubelet[2512]: E0213 19:51:26.649280 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d19124ca-1721-4ff8-b4f5-05a576fbbc55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lkr85" podUID="d19124ca-1721-4ff8-b4f5-05a576fbbc55" Feb 13 19:51:26.680320 containerd[1460]: time="2025-02-13T19:51:26.680166608Z" level=info msg="StopPodSandbox for \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\"" Feb 13 19:51:26.688181 containerd[1460]: time="2025-02-13T19:51:26.688092588Z" level=info msg="Container to stop \"4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:26.688181 containerd[1460]: time="2025-02-13T19:51:26.688159988Z" level=info msg="Container to stop \"78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:26.688181 containerd[1460]: time="2025-02-13T19:51:26.688174456Z" level=info msg="Container to stop \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:26.691050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7-shm.mount: Deactivated successfully. Feb 13 19:51:26.695780 systemd[1]: cri-containerd-c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7.scope: Deactivated successfully. Feb 13 19:51:26.718198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7-rootfs.mount: Deactivated successfully. 
Feb 13 19:51:26.766875 containerd[1460]: time="2025-02-13T19:51:26.766671128Z" level=info msg="shim disconnected" id=c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7 namespace=k8s.io Feb 13 19:51:26.766875 containerd[1460]: time="2025-02-13T19:51:26.766732315Z" level=warning msg="cleaning up after shim disconnected" id=c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7 namespace=k8s.io Feb 13 19:51:26.766875 containerd[1460]: time="2025-02-13T19:51:26.766745471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:26.789655 containerd[1460]: time="2025-02-13T19:51:26.789596384Z" level=info msg="TearDown network for sandbox \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" successfully" Feb 13 19:51:26.789655 containerd[1460]: time="2025-02-13T19:51:26.789639266Z" level=info msg="StopPodSandbox for \"c9cc1c98ba7646c3a2596c459051b58d7709776e428e7aad979c93a006d039a7\" returns successfully" Feb 13 19:51:26.836724 kubelet[2512]: I0213 19:51:26.836670 2512 memory_manager.go:355] "RemoveStaleState removing state" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" containerName="calico-node" Feb 13 19:51:26.836724 kubelet[2512]: I0213 19:51:26.836705 2512 memory_manager.go:355] "RemoveStaleState removing state" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" containerName="calico-node" Feb 13 19:51:26.837008 kubelet[2512]: I0213 19:51:26.836746 2512 memory_manager.go:355] "RemoveStaleState removing state" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" containerName="calico-node" Feb 13 19:51:26.858120 systemd[1]: Created slice kubepods-besteffort-pod1052668e_4829_4a2d_8533_5a8337ad5b12.slice - libcontainer container kubepods-besteffort-pod1052668e_4829_4a2d_8533_5a8337ad5b12.slice. Feb 13 19:51:26.903888 kubelet[2512]: I0213 19:51:26.902987 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-policysync\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.903888 kubelet[2512]: I0213 19:51:26.903031 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-flexvol-driver-host\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.903888 kubelet[2512]: I0213 19:51:26.903088 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b21d5cce-88cd-478b-b123-4ff08ac7f29e-node-certs\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.903888 kubelet[2512]: I0213 19:51:26.903105 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-lib-modules\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.903888 kubelet[2512]: I0213 19:51:26.903121 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-net-dir\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.903888 kubelet[2512]: I0213 19:51:26.903139 
2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-bin-dir\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.904193 kubelet[2512]: I0213 19:51:26.903136 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-policysync" (OuterVolumeSpecName: "policysync") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.904193 kubelet[2512]: I0213 19:51:26.903179 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.904193 kubelet[2512]: I0213 19:51:26.903155 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-log-dir\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.904193 kubelet[2512]: I0213 19:51:26.903202 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.904193 kubelet[2512]: I0213 19:51:26.903207 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-xtables-lock\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.904361 kubelet[2512]: I0213 19:51:26.903198 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.904361 kubelet[2512]: I0213 19:51:26.903244 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.904361 kubelet[2512]: I0213 19:51:26.903220 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.904361 kubelet[2512]: I0213 19:51:26.903253 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-run-calico\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.904361 kubelet[2512]: I0213 19:51:26.903330 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22x7h\" (UniqueName: \"kubernetes.io/projected/b21d5cce-88cd-478b-b123-4ff08ac7f29e-kube-api-access-22x7h\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.904585 kubelet[2512]: I0213 19:51:26.903374 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21d5cce-88cd-478b-b123-4ff08ac7f29e-tigera-ca-bundle\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.904585 kubelet[2512]: I0213 19:51:26.903393 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-lib-calico\") pod \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\" (UID: \"b21d5cce-88cd-478b-b123-4ff08ac7f29e\") " Feb 13 19:51:26.904585 kubelet[2512]: I0213 19:51:26.903478 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-cni-log-dir\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.904585 kubelet[2512]: I0213 19:51:26.903500 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-policysync\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.904585 kubelet[2512]: I0213 19:51:26.903539 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-var-lib-calico\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.904585 kubelet[2512]: I0213 19:51:26.903555 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-cni-net-dir\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907375 kubelet[2512]: I0213 19:51:26.903576 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-cni-bin-dir\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907375 kubelet[2512]: I0213 19:51:26.903592 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1052668e-4829-4a2d-8533-5a8337ad5b12-tigera-ca-bundle\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907375 kubelet[2512]: I0213 19:51:26.903610 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1052668e-4829-4a2d-8533-5a8337ad5b12-node-certs\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907375 kubelet[2512]: I0213 19:51:26.903630 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-lib-modules\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907375 kubelet[2512]: I0213 19:51:26.903650 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-xtables-lock\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907748 kubelet[2512]: I0213 19:51:26.903672 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-var-run-calico\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907748 kubelet[2512]: I0213 19:51:26.903692 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx92g\" (UniqueName: \"kubernetes.io/projected/1052668e-4829-4a2d-8533-5a8337ad5b12-kube-api-access-zx92g\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907748 kubelet[2512]: I0213 19:51:26.903713 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1052668e-4829-4a2d-8533-5a8337ad5b12-flexvol-driver-host\") pod \"calico-node-tlnkq\" (UID: \"1052668e-4829-4a2d-8533-5a8337ad5b12\") " pod="calico-system/calico-node-tlnkq" Feb 13 19:51:26.907748 kubelet[2512]: I0213 19:51:26.903736 2512 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-policysync\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:26.907748 kubelet[2512]: I0213 19:51:26.903745 2512 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:26.907748 kubelet[2512]: I0213 19:51:26.903758 2512 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:26.907748 kubelet[2512]: I0213 19:51:26.903767 2512 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:26.907975 kubelet[2512]: I0213 19:51:26.903776 2512 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:26.907975 kubelet[2512]: I0213 19:51:26.903787 2512 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:26.907975 kubelet[2512]: I0213 19:51:26.903269 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.907975 kubelet[2512]: I0213 19:51:26.903277 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.907975 kubelet[2512]: I0213 19:51:26.903590 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:26.909645 kubelet[2512]: I0213 19:51:26.909599 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21d5cce-88cd-478b-b123-4ff08ac7f29e-node-certs" (OuterVolumeSpecName: "node-certs") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:51:26.909947 kubelet[2512]: I0213 19:51:26.909921 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21d5cce-88cd-478b-b123-4ff08ac7f29e-kube-api-access-22x7h" (OuterVolumeSpecName: "kube-api-access-22x7h") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "kube-api-access-22x7h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:51:26.911055 systemd[1]: var-lib-kubelet-pods-b21d5cce\x2d88cd\x2d478b\x2db123\x2d4ff08ac7f29e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22x7h.mount: Deactivated successfully. Feb 13 19:51:26.912005 kubelet[2512]: I0213 19:51:26.911933 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21d5cce-88cd-478b-b123-4ff08ac7f29e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b21d5cce-88cd-478b-b123-4ff08ac7f29e" (UID: "b21d5cce-88cd-478b-b123-4ff08ac7f29e"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:51:26.914486 systemd[1]: var-lib-kubelet-pods-b21d5cce\x2d88cd\x2d478b\x2db123\x2d4ff08ac7f29e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Feb 13 19:51:26.914624 systemd[1]: var-lib-kubelet-pods-b21d5cce\x2d88cd\x2d478b\x2db123\x2d4ff08ac7f29e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 13 19:51:27.004486 kubelet[2512]: I0213 19:51:27.004403 2512 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:27.004486 kubelet[2512]: I0213 19:51:27.004460 2512 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21d5cce-88cd-478b-b123-4ff08ac7f29e-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:27.004486 kubelet[2512]: I0213 19:51:27.004476 2512 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b21d5cce-88cd-478b-b123-4ff08ac7f29e-node-certs\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:27.004486 kubelet[2512]: I0213 19:51:27.004488 2512 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:27.004486 kubelet[2512]: I0213 19:51:27.004499 2512 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b21d5cce-88cd-478b-b123-4ff08ac7f29e-var-run-calico\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:27.004943 kubelet[2512]: I0213 19:51:27.004511 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-22x7h\" (UniqueName: \"kubernetes.io/projected/b21d5cce-88cd-478b-b123-4ff08ac7f29e-kube-api-access-22x7h\") on node \"localhost\" DevicePath \"\"" Feb 13 19:51:27.161648 kubelet[2512]: E0213 19:51:27.161426 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:27.162836 containerd[1460]: time="2025-02-13T19:51:27.162256576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tlnkq,Uid:1052668e-4829-4a2d-8533-5a8337ad5b12,Namespace:calico-system,Attempt:0,}" Feb 13 19:51:27.193256 containerd[1460]: time="2025-02-13T19:51:27.192133517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:27.193256 containerd[1460]: time="2025-02-13T19:51:27.192215474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:27.193256 containerd[1460]: time="2025-02-13T19:51:27.192235393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:27.193256 containerd[1460]: time="2025-02-13T19:51:27.192476855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:27.235867 systemd[1]: Started cri-containerd-dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538.scope - libcontainer container dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538. Feb 13 19:51:27.265749 containerd[1460]: time="2025-02-13T19:51:27.265675705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tlnkq,Uid:1052668e-4829-4a2d-8533-5a8337ad5b12,Namespace:calico-system,Attempt:0,} returns sandbox id \"dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538\"" Feb 13 19:51:27.266703 kubelet[2512]: E0213 19:51:27.266663 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:27.269159 containerd[1460]: time="2025-02-13T19:51:27.269123718Z" level=info msg="CreateContainer within sandbox \"dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:51:27.290046 containerd[1460]: time="2025-02-13T19:51:27.289977975Z" level=info msg="CreateContainer within sandbox \"dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47\"" Feb 13 19:51:27.290926 containerd[1460]: time="2025-02-13T19:51:27.290896176Z" level=info msg="StartContainer for \"b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47\"" Feb 13 19:51:27.331812 systemd[1]: Started cri-containerd-b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47.scope - libcontainer container b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47. Feb 13 19:51:27.406780 containerd[1460]: time="2025-02-13T19:51:27.406715599Z" level=info msg="StartContainer for \"b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47\" returns successfully" Feb 13 19:51:27.532715 systemd[1]: cri-containerd-b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47.scope: Deactivated successfully. 
Feb 13 19:51:27.575368 containerd[1460]: time="2025-02-13T19:51:27.575297256Z" level=info msg="shim disconnected" id=b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47 namespace=k8s.io Feb 13 19:51:27.575368 containerd[1460]: time="2025-02-13T19:51:27.575361740Z" level=warning msg="cleaning up after shim disconnected" id=b8b8ff29eeb3317299d3a1d2ca849a2db55f897ffbf98a4be558efd542dbdf47 namespace=k8s.io Feb 13 19:51:27.575368 containerd[1460]: time="2025-02-13T19:51:27.575374934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:27.590119 containerd[1460]: time="2025-02-13T19:51:27.590014312Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:51:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:51:27.722908 kubelet[2512]: E0213 19:51:27.722861 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:27.725781 containerd[1460]: time="2025-02-13T19:51:27.725707831Z" level=info msg="CreateContainer within sandbox \"dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:51:27.726187 kubelet[2512]: I0213 19:51:27.726108 2512 scope.go:117] "RemoveContainer" containerID="74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23" Feb 13 19:51:27.737187 systemd[1]: Removed slice kubepods-besteffort-podb21d5cce_88cd_478b_b123_4ff08ac7f29e.slice - libcontainer container kubepods-besteffort-podb21d5cce_88cd_478b_b123_4ff08ac7f29e.slice. Feb 13 19:51:27.738379 containerd[1460]: time="2025-02-13T19:51:27.738115168Z" level=info msg="RemoveContainer for \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\"" Feb 13 19:51:27.752702 containerd[1460]: time="2025-02-13T19:51:27.752633793Z" level=info msg="RemoveContainer for \"74ddc43d3aea60af70523f2bbc0dfaea6fe6a859c4c49d58648cd4f218f07e23\" returns successfully" Feb 13 19:51:27.753011 kubelet[2512]: I0213 19:51:27.752980 2512 scope.go:117] "RemoveContainer" containerID="78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313" Feb 13 19:51:27.754171 containerd[1460]: time="2025-02-13T19:51:27.754129422Z" level=info msg="RemoveContainer for \"78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313\"" Feb 13 19:51:27.761094 containerd[1460]: time="2025-02-13T19:51:27.761031840Z" level=info msg="CreateContainer within sandbox \"dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069\"" Feb 13 19:51:27.763389 containerd[1460]: time="2025-02-13T19:51:27.762308589Z" level=info msg="RemoveContainer for \"78d864c6c922f63b3378b089e460a676ea7c845a67ed622c6a97ce5ad9b75313\" returns successfully" Feb 13 19:51:27.763389 containerd[1460]: time="2025-02-13T19:51:27.762503793Z" level=info msg="StartContainer for \"419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069\"" Feb 13 19:51:27.763512 kubelet[2512]: I0213 19:51:27.762636 2512 scope.go:117] "RemoveContainer" containerID="4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0" Feb 13 19:51:27.769304 containerd[1460]: time="2025-02-13T19:51:27.769255162Z" level=info msg="RemoveContainer for 
\"4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0\"" Feb 13 19:51:27.776200 containerd[1460]: time="2025-02-13T19:51:27.775997654Z" level=info msg="RemoveContainer for \"4f946a7b5d5eee32a77d844d901d358493ad92c0ecb88f2c0310b7d625cbb1b0\" returns successfully" Feb 13 19:51:27.810030 systemd[1]: Started cri-containerd-419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069.scope - libcontainer container 419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069. Feb 13 19:51:27.848566 containerd[1460]: time="2025-02-13T19:51:27.848422529Z" level=info msg="StartContainer for \"419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069\" returns successfully" Feb 13 19:51:28.578154 systemd[1]: cri-containerd-419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069.scope: Deactivated successfully. Feb 13 19:51:28.610566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069-rootfs.mount: Deactivated successfully. Feb 13 19:51:28.620590 containerd[1460]: time="2025-02-13T19:51:28.620498319Z" level=info msg="shim disconnected" id=419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069 namespace=k8s.io Feb 13 19:51:28.620590 containerd[1460]: time="2025-02-13T19:51:28.620580315Z" level=warning msg="cleaning up after shim disconnected" id=419fa158543a011df4bd9e9f3df32fa9a6c34e8c8017ac3d2d1ed1839c5ae069 namespace=k8s.io Feb 13 19:51:28.621014 containerd[1460]: time="2025-02-13T19:51:28.620592760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:28.621513 containerd[1460]: time="2025-02-13T19:51:28.620570357Z" level=info msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\"" Feb 13 19:51:28.626591 kubelet[2512]: I0213 19:51:28.626544 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21d5cce-88cd-478b-b123-4ff08ac7f29e" path="/var/lib/kubelet/pods/b21d5cce-88cd-478b-b123-4ff08ac7f29e/volumes" Feb 13 19:51:28.660055 containerd[1460]: time="2025-02-13T19:51:28.659970668Z" level=error msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\" failed" error="failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:28.660318 kubelet[2512]: E0213 19:51:28.660270 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:51:28.660399 kubelet[2512]: E0213 19:51:28.660337 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b"} Feb 13 19:51:28.660399 kubelet[2512]: E0213 19:51:28.660381 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:51:28.660573 kubelet[2512]: E0213 19:51:28.660409 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f04c116-34a0-411d-a3e2-e79b0fc5cc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" podUID="7f04c116-34a0-411d-a3e2-e79b0fc5cc48" Feb 13 19:51:28.671172 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:49596.service - OpenSSH per-connection server daemon (10.0.0.1:49596). Feb 13 19:51:28.722379 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 49596 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:28.724946 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:28.731424 systemd-logind[1442]: New session 20 of user core. Feb 13 19:51:28.737872 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:51:28.759368 kubelet[2512]: E0213 19:51:28.759318 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:28.776946 containerd[1460]: time="2025-02-13T19:51:28.773614800Z" level=info msg="CreateContainer within sandbox \"dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:51:28.920029 containerd[1460]: time="2025-02-13T19:51:28.919794846Z" level=info msg="CreateContainer within sandbox \"dad0eb4e92d27010135786982d8902cacf9d154d4c26120238bc728bdb226538\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"973992bb38f632c90b7ff173a1ff9b9d394e61c5698936c68cc26381dc88150b\"" Feb 13 19:51:28.921033 containerd[1460]: time="2025-02-13T19:51:28.920970751Z" level=info msg="StartContainer for \"973992bb38f632c90b7ff173a1ff9b9d394e61c5698936c68cc26381dc88150b\"" Feb 13 19:51:28.935948 sshd[4645]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:28.946427 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:49596.service: Deactivated successfully. Feb 13 19:51:28.948879 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:51:28.951550 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:51:28.953099 systemd-logind[1442]: Removed session 20. Feb 13 19:51:28.969849 systemd[1]: Started cri-containerd-973992bb38f632c90b7ff173a1ff9b9d394e61c5698936c68cc26381dc88150b.scope - libcontainer container 973992bb38f632c90b7ff173a1ff9b9d394e61c5698936c68cc26381dc88150b. 
Feb 13 19:51:29.033296 containerd[1460]: time="2025-02-13T19:51:29.033222057Z" level=info msg="StartContainer for \"973992bb38f632c90b7ff173a1ff9b9d394e61c5698936c68cc26381dc88150b\" returns successfully" Feb 13 19:51:29.768826 kubelet[2512]: E0213 19:51:29.768787 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:29.791685 kubelet[2512]: I0213 19:51:29.791608 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tlnkq" podStartSLOduration=3.791581288 podStartE2EDuration="3.791581288s" podCreationTimestamp="2025-02-13 19:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:29.791426893 +0000 UTC m=+75.273031355" watchObservedRunningTime="2025-02-13 19:51:29.791581288 +0000 UTC m=+75.273185771" Feb 13 19:51:30.622289 containerd[1460]: time="2025-02-13T19:51:30.622235336Z" level=info msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\"" Feb 13 19:51:30.774397 kubelet[2512]: E0213 19:51:30.773781 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:30.814712 kernel: bpftool[4909]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.752 [INFO][4854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.752 [INFO][4854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" iface="eth0" netns="/var/run/netns/cni-2005775c-3c87-bd6c-e4ec-d12ebfd7e8dc" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.752 [INFO][4854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" iface="eth0" netns="/var/run/netns/cni-2005775c-3c87-bd6c-e4ec-d12ebfd7e8dc" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.754 [INFO][4854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" iface="eth0" netns="/var/run/netns/cni-2005775c-3c87-bd6c-e4ec-d12ebfd7e8dc" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.755 [INFO][4854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.755 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.808 [INFO][4884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" HandleID="k8s-pod-network.cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Workload="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.808 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.808 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.820 [WARNING][4884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" HandleID="k8s-pod-network.cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Workload="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.820 [INFO][4884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" HandleID="k8s-pod-network.cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Workload="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.822 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:30.830655 containerd[1460]: 2025-02-13 19:51:30.826 [INFO][4854] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3" Feb 13 19:51:30.832024 containerd[1460]: time="2025-02-13T19:51:30.831882172Z" level=info msg="TearDown network for sandbox \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\" successfully" Feb 13 19:51:30.832024 containerd[1460]: time="2025-02-13T19:51:30.831919383Z" level=info msg="StopPodSandbox for \"cca171a60f9db900bb64ff6c854cc254a6df3e70351f87c6e13d0d785443a7b3\" returns successfully" Feb 13 19:51:30.836273 containerd[1460]: time="2025-02-13T19:51:30.836197334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7d74,Uid:171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd,Namespace:calico-system,Attempt:1,}" Feb 13 19:51:30.836667 systemd[1]: run-netns-cni\x2d2005775c\x2d3c87\x2dbd6c\x2de4ec\x2dd12ebfd7e8dc.mount: Deactivated successfully. 
Feb 13 19:51:31.126664 systemd-networkd[1400]: cali2e8606e1745: Link UP Feb 13 19:51:31.126939 systemd-networkd[1400]: cali2e8606e1745: Gained carrier Feb 13 19:51:31.158035 systemd-networkd[1400]: vxlan.calico: Link UP Feb 13 19:51:31.158049 systemd-networkd[1400]: vxlan.calico: Gained carrier Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.940 [INFO][4922] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g7d74-eth0 csi-node-driver- calico-system 171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd 1127 0 2025-02-13 19:50:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-g7d74 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2e8606e1745 [] []}} ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.940 [INFO][4922] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.982 [INFO][4936] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" HandleID="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Workload="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.995 [INFO][4936] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" HandleID="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Workload="localhost-k8s-csi--node--driver--g7d74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002940d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g7d74", "timestamp":"2025-02-13 19:51:30.982252575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.995 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.995 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.995 [INFO][4936] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:30.997 [INFO][4936] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.002 [INFO][4936] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.008 [INFO][4936] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.010 [INFO][4936] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.013 [INFO][4936] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.013 [INFO][4936] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.015 [INFO][4936] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3 Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.064 [INFO][4936] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.111 [INFO][4936] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.111 [INFO][4936] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" host="localhost" Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.111 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:51:31.295484 containerd[1460]: 2025-02-13 19:51:31.111 [INFO][4936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" HandleID="k8s-pod-network.b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Workload="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:31.296216 containerd[1460]: 2025-02-13 19:51:31.115 [INFO][4922] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g7d74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g7d74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e8606e1745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:31.296216 containerd[1460]: 2025-02-13 19:51:31.116 [INFO][4922] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:31.296216 containerd[1460]: 2025-02-13 19:51:31.116 [INFO][4922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e8606e1745 ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:31.296216 containerd[1460]: 2025-02-13 19:51:31.126 [INFO][4922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:31.296216 containerd[1460]: 2025-02-13 19:51:31.126 [INFO][4922] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g7d74-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3", Pod:"csi-node-driver-g7d74", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e8606e1745", MAC:"46:7f:12:be:a6:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:31.296216 containerd[1460]: 2025-02-13 19:51:31.291 [INFO][4922] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3" Namespace="calico-system" Pod="csi-node-driver-g7d74" WorkloadEndpoint="localhost-k8s-csi--node--driver--g7d74-eth0" Feb 13 19:51:31.358033 containerd[1460]: time="2025-02-13T19:51:31.357875996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:31.358827 containerd[1460]: time="2025-02-13T19:51:31.358630510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:31.358827 containerd[1460]: time="2025-02-13T19:51:31.358664946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:31.358934 containerd[1460]: time="2025-02-13T19:51:31.358900035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:31.388962 systemd[1]: Started cri-containerd-b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3.scope - libcontainer container b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3. 
Feb 13 19:51:31.403252 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:31.419927 containerd[1460]: time="2025-02-13T19:51:31.419869677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7d74,Uid:171a5a7b-3719-455d-8f5a-9cdf2ea5e0bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3\"" Feb 13 19:51:31.422284 containerd[1460]: time="2025-02-13T19:51:31.422054928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:51:32.339728 systemd-networkd[1400]: cali2e8606e1745: Gained IPv6LL Feb 13 19:51:32.531716 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL Feb 13 19:51:33.183728 containerd[1460]: time="2025-02-13T19:51:33.183657475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:33.184460 containerd[1460]: time="2025-02-13T19:51:33.184388171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:51:33.185839 containerd[1460]: time="2025-02-13T19:51:33.185763561Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:33.187925 containerd[1460]: time="2025-02-13T19:51:33.187896047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:33.188605 containerd[1460]: time="2025-02-13T19:51:33.188566479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.766471754s" Feb 13 19:51:33.188605 containerd[1460]: time="2025-02-13T19:51:33.188599432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:51:33.191258 containerd[1460]: time="2025-02-13T19:51:33.191229941Z" level=info msg="CreateContainer within sandbox \"b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:51:33.208339 containerd[1460]: time="2025-02-13T19:51:33.208304851Z" level=info msg="CreateContainer within sandbox \"b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0f45280eb9a2cc0092920f07bc2e2d3c19c5545611133520c898149f9c1c32a0\"" Feb 13 19:51:33.208872 containerd[1460]: time="2025-02-13T19:51:33.208785451Z" level=info msg="StartContainer for \"0f45280eb9a2cc0092920f07bc2e2d3c19c5545611133520c898149f9c1c32a0\"" Feb 13 19:51:33.239813 systemd[1]: Started cri-containerd-0f45280eb9a2cc0092920f07bc2e2d3c19c5545611133520c898149f9c1c32a0.scope - libcontainer container 0f45280eb9a2cc0092920f07bc2e2d3c19c5545611133520c898149f9c1c32a0. 
Feb 13 19:51:33.284135 containerd[1460]: time="2025-02-13T19:51:33.284067482Z" level=info msg="StartContainer for \"0f45280eb9a2cc0092920f07bc2e2d3c19c5545611133520c898149f9c1c32a0\" returns successfully" Feb 13 19:51:33.285352 containerd[1460]: time="2025-02-13T19:51:33.285304206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:51:33.948829 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:54130.service - OpenSSH per-connection server daemon (10.0.0.1:54130). Feb 13 19:51:33.991900 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 54130 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:33.993736 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:33.997960 systemd-logind[1442]: New session 21 of user core. Feb 13 19:51:34.006712 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:51:34.131015 sshd[5116]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:34.135610 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:54130.service: Deactivated successfully. Feb 13 19:51:34.137974 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:51:34.138702 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:51:34.140029 systemd-logind[1442]: Removed session 21. Feb 13 19:51:34.619411 kubelet[2512]: E0213 19:51:34.619373 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:35.016490 containerd[1460]: time="2025-02-13T19:51:35.016307147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:35.017158 containerd[1460]: time="2025-02-13T19:51:35.017102416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:51:35.018400 containerd[1460]: time="2025-02-13T19:51:35.018368383Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:35.020801 containerd[1460]: time="2025-02-13T19:51:35.020752206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:35.021492 containerd[1460]: time="2025-02-13T19:51:35.021449908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.736107189s" Feb 13 19:51:35.021598 containerd[1460]: time="2025-02-13T19:51:35.021489274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:51:35.023818 containerd[1460]: time="2025-02-13T19:51:35.023760130Z" level=info msg="CreateContainer within sandbox \"b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3\" 
for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:51:35.043832 containerd[1460]: time="2025-02-13T19:51:35.043780435Z" level=info msg="CreateContainer within sandbox \"b0da3d900ff3e1daaef9b3995b706451453398007d7277f2bfe73942c328b2f3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65d721f1a1e4502ea39b14ebb122fc284203e083789f47dd254039937a90b8e4\"" Feb 13 19:51:35.044321 containerd[1460]: time="2025-02-13T19:51:35.044292011Z" level=info msg="StartContainer for \"65d721f1a1e4502ea39b14ebb122fc284203e083789f47dd254039937a90b8e4\"" Feb 13 19:51:35.085687 systemd[1]: Started cri-containerd-65d721f1a1e4502ea39b14ebb122fc284203e083789f47dd254039937a90b8e4.scope - libcontainer container 65d721f1a1e4502ea39b14ebb122fc284203e083789f47dd254039937a90b8e4. Feb 13 19:51:35.118775 containerd[1460]: time="2025-02-13T19:51:35.118706297Z" level=info msg="StartContainer for \"65d721f1a1e4502ea39b14ebb122fc284203e083789f47dd254039937a90b8e4\" returns successfully" Feb 13 19:51:35.620606 containerd[1460]: time="2025-02-13T19:51:35.620416044Z" level=info msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\"" Feb 13 19:51:35.693127 kubelet[2512]: I0213 19:51:35.693055 2512 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:51:35.693127 kubelet[2512]: I0213 19:51:35.693096 2512 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.675 [INFO][5188] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.675 [INFO][5188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" iface="eth0" netns="/var/run/netns/cni-bed3d475-0091-423d-3178-2fa54578bae4" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.676 [INFO][5188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" iface="eth0" netns="/var/run/netns/cni-bed3d475-0091-423d-3178-2fa54578bae4" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.676 [INFO][5188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" iface="eth0" netns="/var/run/netns/cni-bed3d475-0091-423d-3178-2fa54578bae4" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.676 [INFO][5188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.676 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.707 [INFO][5196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" HandleID="k8s-pod-network.c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Workload="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.707 [INFO][5196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.707 [INFO][5196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.713 [WARNING][5196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" HandleID="k8s-pod-network.c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Workload="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.713 [INFO][5196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" HandleID="k8s-pod-network.c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Workload="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.715 [INFO][5196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:35.721103 containerd[1460]: 2025-02-13 19:51:35.718 [INFO][5188] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131" Feb 13 19:51:35.721658 containerd[1460]: time="2025-02-13T19:51:35.721465237Z" level=info msg="TearDown network for sandbox \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\" successfully" Feb 13 19:51:35.721658 containerd[1460]: time="2025-02-13T19:51:35.721492449Z" level=info msg="StopPodSandbox for \"c38216d2380315dfacb15cbc770fd8d26c0b7c93b1a5da383eeec989e1aea131\" returns successfully" Feb 13 19:51:35.724484 systemd[1]: run-netns-cni\x2dbed3d475\x2d0091\x2d423d\x2d3178\x2d2fa54578bae4.mount: Deactivated successfully. 
Feb 13 19:51:35.724827 containerd[1460]: time="2025-02-13T19:51:35.724801288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-rph6v,Uid:91218b03-c787-48a1-bed0-596bf149fa36,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:51:35.811580 kubelet[2512]: I0213 19:51:35.811470 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g7d74" podStartSLOduration=66.210399665 podStartE2EDuration="1m9.811450349s" podCreationTimestamp="2025-02-13 19:50:26 +0000 UTC" firstStartedPulling="2025-02-13 19:51:31.421333068 +0000 UTC m=+76.902937530" lastFinishedPulling="2025-02-13 19:51:35.022383752 +0000 UTC m=+80.503988214" observedRunningTime="2025-02-13 19:51:35.809806289 +0000 UTC m=+81.291410752" watchObservedRunningTime="2025-02-13 19:51:35.811450349 +0000 UTC m=+81.293054811" Feb 13 19:51:35.853763 systemd-networkd[1400]: cali68ebfa570c9: Link UP Feb 13 19:51:35.854808 systemd-networkd[1400]: cali68ebfa570c9: Gained carrier Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.781 [INFO][5205] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0 calico-apiserver-589bd969f9- calico-apiserver 91218b03-c787-48a1-bed0-596bf149fa36 1161 0 2025-02-13 19:50:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589bd969f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-589bd969f9-rph6v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68ebfa570c9 [] []}} ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.781 [INFO][5205] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.815 [INFO][5219] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" HandleID="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Workload="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.824 [INFO][5219] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" HandleID="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Workload="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000404c70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-589bd969f9-rph6v", "timestamp":"2025-02-13 19:51:35.81540964 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.824 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.824 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.824 [INFO][5219] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.826 [INFO][5219] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.830 [INFO][5219] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.835 [INFO][5219] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.836 [INFO][5219] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.838 [INFO][5219] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.838 [INFO][5219] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.840 [INFO][5219] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.844 [INFO][5219] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.848 [INFO][5219] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.848 [INFO][5219] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" host="localhost" Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.848 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:51:35.869988 containerd[1460]: 2025-02-13 19:51:35.848 [INFO][5219] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" HandleID="k8s-pod-network.5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Workload="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.870904 containerd[1460]: 2025-02-13 19:51:35.851 [INFO][5205] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0", GenerateName:"calico-apiserver-589bd969f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"91218b03-c787-48a1-bed0-596bf149fa36", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589bd969f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-589bd969f9-rph6v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68ebfa570c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:35.870904 containerd[1460]: 2025-02-13 19:51:35.851 [INFO][5205] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.870904 containerd[1460]: 2025-02-13 19:51:35.851 [INFO][5205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68ebfa570c9 ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.870904 containerd[1460]: 2025-02-13 19:51:35.854 [INFO][5205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.870904 containerd[1460]: 2025-02-13 19:51:35.854 [INFO][5205] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0", GenerateName:"calico-apiserver-589bd969f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"91218b03-c787-48a1-bed0-596bf149fa36", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589bd969f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a", Pod:"calico-apiserver-589bd969f9-rph6v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68ebfa570c9", MAC:"06:34:ec:71:1e:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:35.870904 containerd[1460]: 2025-02-13 19:51:35.865 [INFO][5205] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-rph6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--rph6v-eth0" Feb 13 19:51:35.896151 containerd[1460]: time="2025-02-13T19:51:35.895951929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:35.896151 containerd[1460]: time="2025-02-13T19:51:35.896034206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:35.896151 containerd[1460]: time="2025-02-13T19:51:35.896050917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:35.896370 containerd[1460]: time="2025-02-13T19:51:35.896208749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:35.916821 systemd[1]: Started cri-containerd-5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a.scope - libcontainer container 5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a. 
Feb 13 19:51:35.929481 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:35.955299 containerd[1460]: time="2025-02-13T19:51:35.955103666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-rph6v,Uid:91218b03-c787-48a1-bed0-596bf149fa36,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a\"" Feb 13 19:51:35.956560 containerd[1460]: time="2025-02-13T19:51:35.956507617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:51:37.267808 systemd-networkd[1400]: cali68ebfa570c9: Gained IPv6LL Feb 13 19:51:37.620080 containerd[1460]: time="2025-02-13T19:51:37.619636387Z" level=info msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\"" Feb 13 19:51:37.620080 containerd[1460]: time="2025-02-13T19:51:37.619792835Z" level=info msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\"" Feb 13 19:51:37.620814 containerd[1460]: time="2025-02-13T19:51:37.620751505Z" level=info msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\"" Feb 13 19:51:37.620852 kubelet[2512]: E0213 19:51:37.620204 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.688 [INFO][5346] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5346] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" iface="eth0" netns="/var/run/netns/cni-94e0de66-4e8f-048d-c88c-f80c5d8e1eef" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5346] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" iface="eth0" netns="/var/run/netns/cni-94e0de66-4e8f-048d-c88c-f80c5d8e1eef" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5346] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" iface="eth0" netns="/var/run/netns/cni-94e0de66-4e8f-048d-c88c-f80c5d8e1eef" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5346] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.722 [INFO][5360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" HandleID="k8s-pod-network.aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Workload="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.722 [INFO][5360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.722 [INFO][5360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.729 [WARNING][5360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" HandleID="k8s-pod-network.aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Workload="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.729 [INFO][5360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" HandleID="k8s-pod-network.aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Workload="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.731 [INFO][5360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:37.736122 containerd[1460]: 2025-02-13 19:51:37.733 [INFO][5346] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6" Feb 13 19:51:37.736938 containerd[1460]: time="2025-02-13T19:51:37.736896936Z" level=info msg="TearDown network for sandbox \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\" successfully" Feb 13 19:51:37.736938 containerd[1460]: time="2025-02-13T19:51:37.736936702Z" level=info msg="StopPodSandbox for \"aac154ecee5cb1ed63d1e2d3f26c635b623739e36effaeb5af6a4fdc77783db6\" returns successfully" Feb 13 19:51:37.738787 containerd[1460]: time="2025-02-13T19:51:37.738748579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-h7vkc,Uid:9521aaa2-a8f7-4df8-a700-3d246b89217a,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:51:37.741076 systemd[1]: run-netns-cni\x2d94e0de66\x2d4e8f\x2d048d\x2dc88c\x2df80c5d8e1eef.mount: Deactivated successfully. Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.688 [INFO][5333] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5333] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" iface="eth0" netns="/var/run/netns/cni-c15e03fc-8cd7-206f-21cc-c888f4965da3" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5333] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" iface="eth0" netns="/var/run/netns/cni-c15e03fc-8cd7-206f-21cc-c888f4965da3" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5333] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" iface="eth0" netns="/var/run/netns/cni-c15e03fc-8cd7-206f-21cc-c888f4965da3" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.689 [INFO][5333] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.690 [INFO][5333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.723 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" HandleID="k8s-pod-network.43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Workload="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.723 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.731 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.738 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" HandleID="k8s-pod-network.43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Workload="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.738 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" HandleID="k8s-pod-network.43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Workload="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.740 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:37.747752 containerd[1460]: 2025-02-13 19:51:37.743 [INFO][5333] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135" Feb 13 19:51:37.750902 containerd[1460]: time="2025-02-13T19:51:37.748197448Z" level=info msg="TearDown network for sandbox \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\" successfully" Feb 13 19:51:37.750902 containerd[1460]: time="2025-02-13T19:51:37.748223528Z" level=info msg="StopPodSandbox for \"43914c5200be7a0498ab71f9334d3bb8eb94010d0e3ca6370a65b5c76f172135\" returns successfully" Feb 13 19:51:37.750902 containerd[1460]: time="2025-02-13T19:51:37.749679786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lkr85,Uid:d19124ca-1721-4ff8-b4f5-05a576fbbc55,Namespace:kube-system,Attempt:1,}" Feb 13 19:51:37.751016 kubelet[2512]: E0213 19:51:37.748831 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:37.751943 systemd[1]: run-netns-cni\x2dc15e03fc\x2d8cd7\x2d206f\x2d21cc\x2dc888f4965da3.mount: Deactivated successfully. Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.693 [INFO][5331] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.693 [INFO][5331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" iface="eth0" netns="/var/run/netns/cni-f38da7c9-1eb1-55e4-4d47-ec323c319db3" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.694 [INFO][5331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" iface="eth0" netns="/var/run/netns/cni-f38da7c9-1eb1-55e4-4d47-ec323c319db3" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.694 [INFO][5331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" iface="eth0" netns="/var/run/netns/cni-f38da7c9-1eb1-55e4-4d47-ec323c319db3" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.694 [INFO][5331] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.694 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.727 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" HandleID="k8s-pod-network.f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Workload="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.727 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.740 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.748 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" HandleID="k8s-pod-network.f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Workload="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.748 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" HandleID="k8s-pod-network.f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Workload="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.751 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:37.761788 containerd[1460]: 2025-02-13 19:51:37.756 [INFO][5331] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434" Feb 13 19:51:37.764457 containerd[1460]: time="2025-02-13T19:51:37.762018459Z" level=info msg="TearDown network for sandbox \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\" successfully" Feb 13 19:51:37.764457 containerd[1460]: time="2025-02-13T19:51:37.762052785Z" level=info msg="StopPodSandbox for \"f29274b61e0580d9eb7801359818e87c0ed2862c6405d1c11c7aec1be0fcd434\" returns successfully" Feb 13 19:51:37.764562 kubelet[2512]: E0213 19:51:37.762440 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:37.764961 containerd[1460]: time="2025-02-13T19:51:37.764857086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ht7r8,Uid:737f6164-6397-4855-90ba-00598f17612b,Namespace:kube-system,Attempt:1,}" Feb 13 19:51:37.766361 systemd[1]: run-netns-cni\x2df38da7c9\x2d1eb1\x2d55e4\x2d4d47\x2dec323c319db3.mount: Deactivated successfully. 
Feb 13 19:51:37.972887 systemd-networkd[1400]: cali8a05569b581: Link UP Feb 13 19:51:37.973765 systemd-networkd[1400]: cali8a05569b581: Gained carrier Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.841 [INFO][5382] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0 calico-apiserver-589bd969f9- calico-apiserver 9521aaa2-a8f7-4df8-a700-3d246b89217a 1180 0 2025-02-13 19:50:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589bd969f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-589bd969f9-h7vkc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a05569b581 [] []}} ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.843 [INFO][5382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.893 [INFO][5427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" HandleID="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Workload="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.915 [INFO][5427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" HandleID="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Workload="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e25e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-589bd969f9-h7vkc", "timestamp":"2025-02-13 19:51:37.89304226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.915 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.915 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.918 [INFO][5427] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.923 [INFO][5427] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.934 [INFO][5427] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.941 [INFO][5427] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.946 [INFO][5427] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.950 [INFO][5427] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.950 [INFO][5427] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.952 [INFO][5427] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541 Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.959 [INFO][5427] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.966 [INFO][5427] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.966 [INFO][5427] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" host="localhost" Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.966 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:51:37.992925 containerd[1460]: 2025-02-13 19:51:37.966 [INFO][5427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" HandleID="k8s-pod-network.c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Workload="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.994566 containerd[1460]: 2025-02-13 19:51:37.970 [INFO][5382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0", GenerateName:"calico-apiserver-589bd969f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9521aaa2-a8f7-4df8-a700-3d246b89217a", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589bd969f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-589bd969f9-h7vkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a05569b581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:37.994566 containerd[1460]: 2025-02-13 19:51:37.970 [INFO][5382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.994566 containerd[1460]: 2025-02-13 19:51:37.970 [INFO][5382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a05569b581 ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.994566 containerd[1460]: 2025-02-13 19:51:37.973 [INFO][5382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:37.994566 containerd[1460]: 2025-02-13 19:51:37.974 [INFO][5382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0", GenerateName:"calico-apiserver-589bd969f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9521aaa2-a8f7-4df8-a700-3d246b89217a", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589bd969f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541", Pod:"calico-apiserver-589bd969f9-h7vkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a05569b581", MAC:"0a:3d:47:ec:58:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:37.994566 containerd[1460]: 2025-02-13 19:51:37.987 [INFO][5382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541" Namespace="calico-apiserver" Pod="calico-apiserver-589bd969f9-h7vkc" WorkloadEndpoint="localhost-k8s-calico--apiserver--589bd969f9--h7vkc-eth0" Feb 13 19:51:38.218903 systemd-networkd[1400]: cali5be15273fd4: Link UP Feb 13 19:51:38.219362 systemd-networkd[1400]: cali5be15273fd4: Gained carrier Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:37.883 [INFO][5396] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lkr85-eth0 coredns-668d6bf9bc- kube-system d19124ca-1721-4ff8-b4f5-05a576fbbc55 1179 0 2025-02-13 19:50:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lkr85 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5be15273fd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:37.883 [INFO][5396] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:38.258412 containerd[1460]: 
2025-02-13 19:51:37.939 [INFO][5434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" HandleID="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Workload="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:37.948 [INFO][5434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" HandleID="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Workload="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004444f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lkr85", "timestamp":"2025-02-13 19:51:37.939220841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:37.948 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:37.966 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:37.966 [INFO][5434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.022 [INFO][5434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.028 [INFO][5434] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.042 [INFO][5434] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.044 [INFO][5434] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.047 [INFO][5434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.047 [INFO][5434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.049 [INFO][5434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.056 [INFO][5434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.211 [INFO][5434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.211 [INFO][5434] 
ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" host="localhost" Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.211 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:38.258412 containerd[1460]: 2025-02-13 19:51:38.211 [INFO][5434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" HandleID="k8s-pod-network.b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Workload="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:38.259452 containerd[1460]: 2025-02-13 19:51:38.214 [INFO][5396] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lkr85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d19124ca-1721-4ff8-b4f5-05a576fbbc55", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lkr85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5be15273fd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:38.259452 containerd[1460]: 2025-02-13 19:51:38.214 [INFO][5396] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:38.259452 containerd[1460]: 2025-02-13 19:51:38.215 [INFO][5396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5be15273fd4 ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:38.259452 containerd[1460]: 2025-02-13 19:51:38.217 [INFO][5396] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:38.259452 containerd[1460]: 2025-02-13 19:51:38.217 [INFO][5396] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lkr85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d19124ca-1721-4ff8-b4f5-05a576fbbc55", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a", Pod:"coredns-668d6bf9bc-lkr85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5be15273fd4", MAC:"1a:43:2a:d5:8e:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:38.259452 containerd[1460]: 2025-02-13 19:51:38.255 [INFO][5396] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a" Namespace="kube-system" Pod="coredns-668d6bf9bc-lkr85" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lkr85-eth0" Feb 13 19:51:38.290311 containerd[1460]: time="2025-02-13T19:51:38.290174119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:38.290311 containerd[1460]: time="2025-02-13T19:51:38.290247218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:38.290311 containerd[1460]: time="2025-02-13T19:51:38.290264872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.290618 containerd[1460]: time="2025-02-13T19:51:38.290372296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.306336 containerd[1460]: time="2025-02-13T19:51:38.305193664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:38.306336 containerd[1460]: time="2025-02-13T19:51:38.305266032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:38.306336 containerd[1460]: time="2025-02-13T19:51:38.305297542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.306336 containerd[1460]: time="2025-02-13T19:51:38.305400399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.310322 systemd-networkd[1400]: calif1caadd1261: Link UP Feb 13 19:51:38.312389 systemd-networkd[1400]: calif1caadd1261: Gained carrier Feb 13 19:51:38.324836 systemd[1]: Started cri-containerd-c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541.scope - libcontainer container c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541. Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:37.904 [INFO][5418] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0 coredns-668d6bf9bc- kube-system 737f6164-6397-4855-90ba-00598f17612b 1181 0 2025-02-13 19:50:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ht7r8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1caadd1261 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:37.905 [INFO][5418] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:37.971 [INFO][5439] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" HandleID="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Workload="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.013 [INFO][5439] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" HandleID="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Workload="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000541870), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ht7r8", "timestamp":"2025-02-13 19:51:37.971053739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.013 [INFO][5439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.211 [INFO][5439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.212 [INFO][5439] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.215 [INFO][5439] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.223 [INFO][5439] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.261 [INFO][5439] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.263 [INFO][5439] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.267 [INFO][5439] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.267 [INFO][5439] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.270 [INFO][5439] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.281 [INFO][5439] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.294 [INFO][5439] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.294 [INFO][5439] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" host="localhost" Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.295 [INFO][5439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
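Every address claimed in this stretch (.131, .132, .133 above, and .134 further down) comes out of the same affine block, 192.168.88.128/26, which covers 192.168.88.128 through 192.168.88.191 (64 addresses). A purely illustrative containment check:

```go
// Illustrative only: confirms the pod IPs logged in this section fall inside
// the node's affine /26 block (192.168.88.128 through 192.168.88.191).
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	for _, s := range []string{"192.168.88.131", "192.168.88.132", "192.168.88.133", "192.168.88.134"} {
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
	}
}
```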
Feb 13 19:51:38.333615 containerd[1460]: 2025-02-13 19:51:38.295 [INFO][5439] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" HandleID="k8s-pod-network.149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Workload="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:38.334431 containerd[1460]: 2025-02-13 19:51:38.302 [INFO][5418] cni-plugin/k8s.go 386: Populated endpoint ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"737f6164-6397-4855-90ba-00598f17612b", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ht7r8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1caadd1261", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:38.334431 containerd[1460]: 2025-02-13 19:51:38.303 [INFO][5418] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:38.334431 containerd[1460]: 2025-02-13 19:51:38.303 [INFO][5418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1caadd1261 ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:38.334431 containerd[1460]: 2025-02-13 19:51:38.311 [INFO][5418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:38.334431 containerd[1460]: 2025-02-13 
19:51:38.313 [INFO][5418] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"737f6164-6397-4855-90ba-00598f17612b", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b", Pod:"coredns-668d6bf9bc-ht7r8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1caadd1261", MAC:"e6:af:69:ed:f6:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:38.334431 containerd[1460]: 2025-02-13 19:51:38.329 [INFO][5418] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-ht7r8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ht7r8-eth0" Feb 13 19:51:38.337820 systemd[1]: Started cri-containerd-b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a.scope - libcontainer container b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a. Feb 13 19:51:38.356606 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:38.365982 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:38.390474 containerd[1460]: time="2025-02-13T19:51:38.389970202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:38.390474 containerd[1460]: time="2025-02-13T19:51:38.390355999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:38.391240 containerd[1460]: time="2025-02-13T19:51:38.390372710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.392124 containerd[1460]: time="2025-02-13T19:51:38.392022467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:38.395420 containerd[1460]: time="2025-02-13T19:51:38.395365403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lkr85,Uid:d19124ca-1721-4ff8-b4f5-05a576fbbc55,Namespace:kube-system,Attempt:1,} returns sandbox id \"b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a\"" Feb 13 19:51:38.399153 kubelet[2512]: E0213 19:51:38.399112 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.410514 containerd[1460]: time="2025-02-13T19:51:38.410447668Z" level=info msg="CreateContainer within sandbox \"b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:51:38.421801 systemd[1]: Started cri-containerd-149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b.scope - libcontainer container 149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b. Feb 13 19:51:38.423478 containerd[1460]: time="2025-02-13T19:51:38.423384511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589bd969f9-h7vkc,Uid:9521aaa2-a8f7-4df8-a700-3d246b89217a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541\"" Feb 13 19:51:38.438425 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:38.454119 containerd[1460]: time="2025-02-13T19:51:38.453261634Z" level=info msg="CreateContainer within sandbox \"b9b3528b5079753783678070eee70bf091f6a89823775726379470562ec8785a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd94677a212f5f00d83f23afee7beb2253e3ed461c7cf98f69a5e4c23664eec6\"" Feb 13 19:51:38.455570 containerd[1460]: time="2025-02-13T19:51:38.454757567Z" level=info msg="StartContainer for \"fd94677a212f5f00d83f23afee7beb2253e3ed461c7cf98f69a5e4c23664eec6\"" Feb 13 19:51:38.473474 containerd[1460]: time="2025-02-13T19:51:38.473432314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ht7r8,Uid:737f6164-6397-4855-90ba-00598f17612b,Namespace:kube-system,Attempt:1,} returns sandbox id \"149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b\"" Feb 13 19:51:38.474575 kubelet[2512]: E0213 19:51:38.474539 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.478681 containerd[1460]: time="2025-02-13T19:51:38.478638944Z" level=info msg="CreateContainer within sandbox \"149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:51:38.500767 systemd[1]: Started cri-containerd-fd94677a212f5f00d83f23afee7beb2253e3ed461c7cf98f69a5e4c23664eec6.scope - libcontainer container fd94677a212f5f00d83f23afee7beb2253e3ed461c7cf98f69a5e4c23664eec6. 
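The kubelet "Nameserver limits exceeded" errors repeat throughout this log because the node's resolv.conf lists more nameservers than the three the glibc resolver (and therefore kubelet) will apply, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are passed through. The sketch below shows that truncation in the abstract; it is not kubelet's code, and the fourth nameserver in the sample input is a placeholder, since the omitted entries never appear in the log.

```go
// Hypothetical sketch of the truncation behind the "Nameserver limits exceeded"
// messages: keep only the first three nameservers, as the glibc resolver (and
// kubelet) do. Not kubelet's actual implementation.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3

func appliedNameservers(resolvConf string) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers] // the rest are "omitted", as the log warns
	}
	return ns
}

func main() {
	// 8.8.4.4 is a placeholder for whichever extra entries the host actually had.
	host := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(appliedNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```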
Feb 13 19:51:38.521498 containerd[1460]: time="2025-02-13T19:51:38.519958830Z" level=info msg="CreateContainer within sandbox \"149c262f8a7536ddc5c24b7b952b53810e63a88494ee3dbc261135e479496d1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2b0b28e9009d08a3dc88f766b9a3fab931154fa35c8d7c65001d4f53e678d5d\"" Feb 13 19:51:38.521648 containerd[1460]: time="2025-02-13T19:51:38.521609790Z" level=info msg="StartContainer for \"d2b0b28e9009d08a3dc88f766b9a3fab931154fa35c8d7c65001d4f53e678d5d\"" Feb 13 19:51:38.557684 systemd[1]: Started cri-containerd-d2b0b28e9009d08a3dc88f766b9a3fab931154fa35c8d7c65001d4f53e678d5d.scope - libcontainer container d2b0b28e9009d08a3dc88f766b9a3fab931154fa35c8d7c65001d4f53e678d5d. Feb 13 19:51:38.883460 containerd[1460]: time="2025-02-13T19:51:38.883201156Z" level=info msg="StartContainer for \"fd94677a212f5f00d83f23afee7beb2253e3ed461c7cf98f69a5e4c23664eec6\" returns successfully" Feb 13 19:51:38.883460 containerd[1460]: time="2025-02-13T19:51:38.883422128Z" level=info msg="StartContainer for \"d2b0b28e9009d08a3dc88f766b9a3fab931154fa35c8d7c65001d4f53e678d5d\" returns successfully" Feb 13 19:51:38.889087 kubelet[2512]: E0213 19:51:38.888969 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.891620 kubelet[2512]: E0213 19:51:38.891548 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:38.910602 kubelet[2512]: I0213 19:51:38.908994 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lkr85" podStartSLOduration=79.908971827 podStartE2EDuration="1m19.908971827s" podCreationTimestamp="2025-02-13 19:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:38.908015793 +0000 UTC m=+84.389620255" watchObservedRunningTime="2025-02-13 19:51:38.908971827 +0000 UTC m=+84.390576289" Feb 13 19:51:38.954117 containerd[1460]: time="2025-02-13T19:51:38.954016387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:38.956225 containerd[1460]: time="2025-02-13T19:51:38.956169163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:51:38.960472 containerd[1460]: time="2025-02-13T19:51:38.960400954Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:38.963654 containerd[1460]: time="2025-02-13T19:51:38.963597501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:38.965313 containerd[1460]: time="2025-02-13T19:51:38.964366938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.007806339s" Feb 13 19:51:38.965313 containerd[1460]: time="2025-02-13T19:51:38.964418526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:51:38.967616 containerd[1460]: time="2025-02-13T19:51:38.966576983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:51:38.967616 containerd[1460]: time="2025-02-13T19:51:38.967452983Z" level=info msg="CreateContainer within sandbox \"5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:51:39.002627 containerd[1460]: time="2025-02-13T19:51:39.002568655Z" level=info msg="CreateContainer within sandbox \"5ff46cd677dfaa99734b02bd81a7ada2a862478f30f173ceb7bcb7f323b5a70a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"52ab5599836ce6c975a010bd7bfff9a19b8232f9c347ee9733b7bd4cfb0c0438\"" Feb 13 19:51:39.003126 containerd[1460]: time="2025-02-13T19:51:39.003080442Z" level=info msg="StartContainer for \"52ab5599836ce6c975a010bd7bfff9a19b8232f9c347ee9733b7bd4cfb0c0438\"" Feb 13 19:51:39.042804 systemd[1]: Started cri-containerd-52ab5599836ce6c975a010bd7bfff9a19b8232f9c347ee9733b7bd4cfb0c0438.scope - libcontainer container 52ab5599836ce6c975a010bd7bfff9a19b8232f9c347ee9733b7bd4cfb0c0438. Feb 13 19:51:39.061028 systemd-networkd[1400]: cali8a05569b581: Gained IPv6LL Feb 13 19:51:39.149801 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:54134.service - OpenSSH per-connection server daemon (10.0.0.1:54134). Feb 13 19:51:39.225247 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 54134 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:39.227677 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:39.229687 containerd[1460]: time="2025-02-13T19:51:39.229634921Z" level=info msg="StartContainer for \"52ab5599836ce6c975a010bd7bfff9a19b8232f9c347ee9733b7bd4cfb0c0438\" returns successfully" Feb 13 19:51:39.232656 systemd-logind[1442]: New session 22 of user core. Feb 13 19:51:39.240724 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:51:39.437747 sshd[5734]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:39.448344 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:54134.service: Deactivated successfully. Feb 13 19:51:39.450144 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:51:39.451184 containerd[1460]: time="2025-02-13T19:51:39.451128813Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:39.452742 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. 
Feb 13 19:51:39.453844 containerd[1460]: time="2025-02-13T19:51:39.453572311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:51:39.456228 containerd[1460]: time="2025-02-13T19:51:39.456192427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 489.561742ms" Feb 13 19:51:39.456228 containerd[1460]: time="2025-02-13T19:51:39.456223396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:51:39.459078 containerd[1460]: time="2025-02-13T19:51:39.458918183Z" level=info msg="CreateContainer within sandbox \"c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:51:39.462400 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168). Feb 13 19:51:39.464100 systemd-logind[1442]: Removed session 22. Feb 13 19:51:39.480872 containerd[1460]: time="2025-02-13T19:51:39.480820919Z" level=info msg="CreateContainer within sandbox \"c73e7f332d4074ac83faa98f80aeba6bba8b527346460360e147cc28c6471541\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6122820c43996bc013dbb173cf7ba00400edc2fad5921ba26a3b8b991b6e3cba\"" Feb 13 19:51:39.481636 containerd[1460]: time="2025-02-13T19:51:39.481512858Z" level=info msg="StartContainer for \"6122820c43996bc013dbb173cf7ba00400edc2fad5921ba26a3b8b991b6e3cba\"" Feb 13 19:51:39.498904 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:39.502575 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:39.510378 systemd-logind[1442]: New session 23 of user core. Feb 13 19:51:39.524734 systemd[1]: Started cri-containerd-6122820c43996bc013dbb173cf7ba00400edc2fad5921ba26a3b8b991b6e3cba.scope - libcontainer container 6122820c43996bc013dbb173cf7ba00400edc2fad5921ba26a3b8b991b6e3cba. Feb 13 19:51:39.526096 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:51:39.663170 containerd[1460]: time="2025-02-13T19:51:39.662982767Z" level=info msg="StartContainer for \"6122820c43996bc013dbb173cf7ba00400edc2fad5921ba26a3b8b991b6e3cba\" returns successfully" Feb 13 19:51:39.699772 systemd-networkd[1400]: cali5be15273fd4: Gained IPv6LL Feb 13 19:51:39.744991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457101951.mount: Deactivated successfully. 
Feb 13 19:51:39.764805 systemd-networkd[1400]: calif1caadd1261: Gained IPv6LL Feb 13 19:51:39.901637 kubelet[2512]: E0213 19:51:39.900789 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:39.901637 kubelet[2512]: E0213 19:51:39.900823 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:39.980874 kubelet[2512]: I0213 19:51:39.980639 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ht7r8" podStartSLOduration=80.980614579 podStartE2EDuration="1m20.980614579s" podCreationTimestamp="2025-02-13 19:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:38.927423058 +0000 UTC m=+84.409027520" watchObservedRunningTime="2025-02-13 19:51:39.980614579 +0000 UTC m=+85.462219061" Feb 13 19:51:39.981335 kubelet[2512]: I0213 19:51:39.980989 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589bd969f9-rph6v" podStartSLOduration=70.971820816 podStartE2EDuration="1m13.980982992s" podCreationTimestamp="2025-02-13 19:50:26 +0000 UTC" firstStartedPulling="2025-02-13 19:51:35.956283519 +0000 UTC m=+81.437887981" lastFinishedPulling="2025-02-13 19:51:38.965445695 +0000 UTC m=+84.447050157" observedRunningTime="2025-02-13 19:51:39.97932943 +0000 UTC m=+85.460933892" watchObservedRunningTime="2025-02-13 19:51:39.980982992 +0000 UTC m=+85.462587454" Feb 13 19:51:40.044164 kubelet[2512]: I0213 19:51:40.044068 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589bd969f9-h7vkc" podStartSLOduration=73.011895622 podStartE2EDuration="1m14.044048628s" podCreationTimestamp="2025-02-13 19:50:26 +0000 UTC" firstStartedPulling="2025-02-13 19:51:38.424858192 +0000 UTC m=+83.906462654" lastFinishedPulling="2025-02-13 19:51:39.457011198 +0000 UTC m=+84.938615660" observedRunningTime="2025-02-13 19:51:40.043602177 +0000 UTC m=+85.525206639" watchObservedRunningTime="2025-02-13 19:51:40.044048628 +0000 UTC m=+85.525653090" Feb 13 19:51:40.282101 sshd[5757]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:40.298141 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:38176.service - OpenSSH per-connection server daemon (10.0.0.1:38176). Feb 13 19:51:40.298903 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:38168.service: Deactivated successfully. Feb 13 19:51:40.302682 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:51:40.306137 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:51:40.308630 systemd-logind[1442]: Removed session 23. Feb 13 19:51:40.337859 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 38176 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:51:40.339609 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:40.344098 systemd-logind[1442]: New session 24 of user core. Feb 13 19:51:40.354667 systemd[1]: Started session-24.scope - Session 24 of User core. 
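The podStartSLOduration figures in the tracker lines above are simply the observed running time minus the pod creation timestamp; the pull timestamps are the zero value for the coredns pods, so nothing is subtracted for image pulls. For coredns-668d6bf9bc-lkr85 the arithmetic works out as shown below.

```go
// Reproduces the podStartSLOduration arithmetic from the tracker lines above:
// observed running time minus pod creation time (no image pull interval to
// subtract here, since both pull timestamps are the zero value).
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-02-13T19:50:19Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-02-13T19:51:38.908971827Z")
	fmt.Println(running.Sub(created)) // 1m19.908971827s, i.e. podStartSLOduration=79.908971827
}
```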
Feb 13 19:51:40.621387 containerd[1460]: time="2025-02-13T19:51:40.620865891Z" level=info msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\"" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.682 [INFO][5835] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.682 [INFO][5835] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" iface="eth0" netns="/var/run/netns/cni-8bde1f78-4bc7-5ea0-31be-4b1b15ba949d" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.683 [INFO][5835] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" iface="eth0" netns="/var/run/netns/cni-8bde1f78-4bc7-5ea0-31be-4b1b15ba949d" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.685 [INFO][5835] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" iface="eth0" netns="/var/run/netns/cni-8bde1f78-4bc7-5ea0-31be-4b1b15ba949d" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.685 [INFO][5835] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.685 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.734 [INFO][5843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" HandleID="k8s-pod-network.39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Workload="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.735 [INFO][5843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.735 [INFO][5843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.751 [WARNING][5843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" HandleID="k8s-pod-network.39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Workload="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.751 [INFO][5843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" HandleID="k8s-pod-network.39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Workload="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.753 [INFO][5843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:40.760548 containerd[1460]: 2025-02-13 19:51:40.756 [INFO][5835] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b" Feb 13 19:51:40.761049 containerd[1460]: time="2025-02-13T19:51:40.760785460Z" level=info msg="TearDown network for sandbox \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\" successfully" Feb 13 19:51:40.761049 containerd[1460]: time="2025-02-13T19:51:40.760813854Z" level=info msg="StopPodSandbox for \"39c95d58873a8f76ba48edb16cabbad7fc6d04a95f6e26fa80f630a6fe610f0b\" returns successfully" Feb 13 19:51:40.761716 containerd[1460]: time="2025-02-13T19:51:40.761671960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d4c45f-7bhrp,Uid:7f04c116-34a0-411d-a3e2-e79b0fc5cc48,Namespace:calico-system,Attempt:1,}" Feb 13 19:51:40.764151 systemd[1]: run-netns-cni\x2d8bde1f78\x2d4bc7\x2d5ea0\x2d31be\x2d4b1b15ba949d.mount: Deactivated successfully. Feb 13 19:51:40.901849 kubelet[2512]: E0213 19:51:40.901731 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:40.901849 kubelet[2512]: E0213 19:51:40.901793 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:41.127255 systemd-networkd[1400]: calid2a75f5b3cb: Link UP Feb 13 19:51:41.129743 systemd-networkd[1400]: calid2a75f5b3cb: Gained carrier Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.049 [INFO][5853] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0 calico-kube-controllers-59d4c45f- calico-system 7f04c116-34a0-411d-a3e2-e79b0fc5cc48 1248 0 2025-02-13 19:50:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59d4c45f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59d4c45f-7bhrp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid2a75f5b3cb [] []}} ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.049 [INFO][5853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.078 [INFO][5870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" HandleID="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Workload="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.094 [INFO][5870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" 
HandleID="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Workload="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59d4c45f-7bhrp", "timestamp":"2025-02-13 19:51:41.078165528 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.094 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.094 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.094 [INFO][5870] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.096 [INFO][5870] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.100 [INFO][5870] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.103 [INFO][5870] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.105 [INFO][5870] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.108 [INFO][5870] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.108 [INFO][5870] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.109 [INFO][5870] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1 Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.113 [INFO][5870] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.118 [INFO][5870] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.119 [INFO][5870] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" host="localhost" Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.119 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:51:41.147201 containerd[1460]: 2025-02-13 19:51:41.119 [INFO][5870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" HandleID="k8s-pod-network.1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Workload="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0"
Feb 13 19:51:41.148009 containerd[1460]: 2025-02-13 19:51:41.123 [INFO][5853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0", GenerateName:"calico-kube-controllers-59d4c45f-", Namespace:"calico-system", SelfLink:"", UID:"7f04c116-34a0-411d-a3e2-e79b0fc5cc48", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59d4c45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59d4c45f-7bhrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2a75f5b3cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:51:41.148009 containerd[1460]: 2025-02-13 19:51:41.123 [INFO][5853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0"
Feb 13 19:51:41.148009 containerd[1460]: 2025-02-13 19:51:41.123 [INFO][5853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2a75f5b3cb ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0"
Feb 13 19:51:41.148009 containerd[1460]: 2025-02-13 19:51:41.129 [INFO][5853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0"
Feb 13 19:51:41.148009 containerd[1460]: 2025-02-13 19:51:41.131 [INFO][5853] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0", GenerateName:"calico-kube-controllers-59d4c45f-", Namespace:"calico-system", SelfLink:"", UID:"7f04c116-34a0-411d-a3e2-e79b0fc5cc48", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59d4c45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1", Pod:"calico-kube-controllers-59d4c45f-7bhrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2a75f5b3cb", MAC:"f6:6d:64:e6:fd:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:51:41.148009 containerd[1460]: 2025-02-13 19:51:41.143 [INFO][5853] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1" Namespace="calico-system" Pod="calico-kube-controllers-59d4c45f-7bhrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59d4c45f--7bhrp-eth0"
Feb 13 19:51:41.187157 containerd[1460]: time="2025-02-13T19:51:41.186955352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:41.187157 containerd[1460]: time="2025-02-13T19:51:41.187065252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:41.187157 containerd[1460]: time="2025-02-13T19:51:41.187083566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:41.187308 containerd[1460]: time="2025-02-13T19:51:41.187206030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:41.228770 systemd[1]: Started cri-containerd-1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1.scope - libcontainer container 1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1.
Feb 13 19:51:41.243443 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:51:41.275336 containerd[1460]: time="2025-02-13T19:51:41.275287787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d4c45f-7bhrp,Uid:7f04c116-34a0-411d-a3e2-e79b0fc5cc48,Namespace:calico-system,Attempt:1,} returns sandbox id \"1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1\""
Feb 13 19:51:41.277876 containerd[1460]: time="2025-02-13T19:51:41.277846480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Feb 13 19:51:41.357779 sshd[5808]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:41.366766 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:38176.service: Deactivated successfully.
Feb 13 19:51:41.369022 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:51:41.372278 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:51:41.378115 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:38186.service - OpenSSH per-connection server daemon (10.0.0.1:38186).
Feb 13 19:51:41.379083 systemd-logind[1442]: Removed session 24.
Feb 13 19:51:41.415914 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 38186 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g
Feb 13 19:51:41.417442 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:41.421887 systemd-logind[1442]: New session 25 of user core.
Feb 13 19:51:41.432699 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:51:41.657849 sshd[5936]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:41.666786 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:38186.service: Deactivated successfully.
Feb 13 19:51:41.668974 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:51:41.673331 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:51:41.681845 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:38192.service - OpenSSH per-connection server daemon (10.0.0.1:38192).
Feb 13 19:51:41.682842 systemd-logind[1442]: Removed session 25.
Feb 13 19:51:41.713720 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 38192 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g
Feb 13 19:51:41.715674 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:41.720888 systemd-logind[1442]: New session 26 of user core.
Feb 13 19:51:41.725736 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:51:41.845734 sshd[5955]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:41.850036 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:38192.service: Deactivated successfully.
Feb 13 19:51:41.852118 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:51:41.852883 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:51:41.853830 systemd-logind[1442]: Removed session 26.
Feb 13 19:51:43.155749 systemd-networkd[1400]: calid2a75f5b3cb: Gained IPv6LL
Feb 13 19:51:44.311202 containerd[1460]: time="2025-02-13T19:51:44.311111174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:44.312843 containerd[1460]: time="2025-02-13T19:51:44.312761204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Feb 13 19:51:44.314259 containerd[1460]: time="2025-02-13T19:51:44.314199630Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:44.316908 containerd[1460]: time="2025-02-13T19:51:44.316828923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:44.317436 containerd[1460]: time="2025-02-13T19:51:44.317393828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.039345303s"
Feb 13 19:51:44.317436 containerd[1460]: time="2025-02-13T19:51:44.317431260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Feb 13 19:51:44.330779 containerd[1460]: time="2025-02-13T19:51:44.330715827Z" level=info msg="CreateContainer within sandbox \"1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 19:51:44.357607 containerd[1460]: time="2025-02-13T19:51:44.357397768Z" level=info msg="CreateContainer within sandbox \"1dadd3b798aa5dd78a12ce4d621b4ab3aa3aeee74e9f118a2e9baa463e0919e1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b935c87a71c5224840b051311b0c371063f52b5f915e55bc01fff1c144dd8310\""
Feb 13 19:51:44.358291 containerd[1460]: time="2025-02-13T19:51:44.358175096Z" level=info msg="StartContainer for \"b935c87a71c5224840b051311b0c371063f52b5f915e55bc01fff1c144dd8310\""
Feb 13 19:51:44.396799 systemd[1]: Started cri-containerd-b935c87a71c5224840b051311b0c371063f52b5f915e55bc01fff1c144dd8310.scope - libcontainer container b935c87a71c5224840b051311b0c371063f52b5f915e55bc01fff1c144dd8310.
Feb 13 19:51:44.514851 containerd[1460]: time="2025-02-13T19:51:44.514797837Z" level=info msg="StartContainer for \"b935c87a71c5224840b051311b0c371063f52b5f915e55bc01fff1c144dd8310\" returns successfully"
Feb 13 19:51:44.985090 kubelet[2512]: I0213 19:51:44.984964 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59d4c45f-7bhrp" podStartSLOduration=75.943393475 podStartE2EDuration="1m18.98494079s" podCreationTimestamp="2025-02-13 19:50:26 +0000 UTC" firstStartedPulling="2025-02-13 19:51:41.2768536 +0000 UTC m=+86.758458062" lastFinishedPulling="2025-02-13 19:51:44.318400925 +0000 UTC m=+89.800005377" observedRunningTime="2025-02-13 19:51:44.930356237 +0000 UTC m=+90.411960699" watchObservedRunningTime="2025-02-13 19:51:44.98494079 +0000 UTC m=+90.466545252"
Feb 13 19:51:46.863081 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:38206.service - OpenSSH per-connection server daemon (10.0.0.1:38206).
Feb 13 19:51:46.907155 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 38206 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g
Feb 13 19:51:46.909447 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:46.914160 systemd-logind[1442]: New session 27 of user core.
Feb 13 19:51:46.921806 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:51:47.041870 sshd[6041]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:47.045140 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:38206.service: Deactivated successfully.
Feb 13 19:51:47.047384 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:51:47.049243 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:51:47.050389 systemd-logind[1442]: Removed session 27.
Feb 13 19:51:47.619091 kubelet[2512]: E0213 19:51:47.619027 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:51:49.618819 kubelet[2512]: E0213 19:51:49.618769 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:51:50.618475 kubelet[2512]: E0213 19:51:50.618424 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:51:52.065857 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:56090.service - OpenSSH per-connection server daemon (10.0.0.1:56090).
Feb 13 19:51:52.100582 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 56090 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g
Feb 13 19:51:52.102553 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:52.107452 systemd-logind[1442]: New session 28 of user core.
Feb 13 19:51:52.113720 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:51:52.245067 sshd[6069]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:52.250065 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:56090.service: Deactivated successfully.
Feb 13 19:51:52.251960 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:51:52.252802 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:51:52.254038 systemd-logind[1442]: Removed session 28.
Feb 13 19:51:57.257085 systemd[1]: Started sshd@28-10.0.0.13:22-10.0.0.1:56092.service - OpenSSH per-connection server daemon (10.0.0.1:56092).
Feb 13 19:51:57.296081 sshd[6084]: Accepted publickey for core from 10.0.0.1 port 56092 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g
Feb 13 19:51:57.297858 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:57.302475 systemd-logind[1442]: New session 29 of user core.
Feb 13 19:51:57.307820 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:51:57.445737 sshd[6084]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:57.450375 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:56092.service: Deactivated successfully.
Feb 13 19:51:57.452467 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:51:57.453325 systemd-logind[1442]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:51:57.454456 systemd-logind[1442]: Removed session 29.
Feb 13 19:52:00.876492 kubelet[2512]: E0213 19:52:00.876380 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:52:02.464140 systemd[1]: Started sshd@29-10.0.0.13:22-10.0.0.1:46520.service - OpenSSH per-connection server daemon (10.0.0.1:46520).
Feb 13 19:52:02.504164 sshd[6146]: Accepted publickey for core from 10.0.0.1 port 46520 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g
Feb 13 19:52:02.506866 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:02.513316 systemd-logind[1442]: New session 30 of user core.
Feb 13 19:52:02.525872 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 19:52:02.665071 sshd[6146]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:02.670067 systemd[1]: sshd@29-10.0.0.13:22-10.0.0.1:46520.service: Deactivated successfully.
Feb 13 19:52:02.672209 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 19:52:02.673064 systemd-logind[1442]: Session 30 logged out. Waiting for processes to exit.
Feb 13 19:52:02.674250 systemd-logind[1442]: Removed session 30.
Feb 13 19:52:07.677780 systemd[1]: Started sshd@30-10.0.0.13:22-10.0.0.1:46526.service - OpenSSH per-connection server daemon (10.0.0.1:46526).
Feb 13 19:52:07.719341 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 46526 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g
Feb 13 19:52:07.721168 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:07.726626 systemd-logind[1442]: New session 31 of user core.
Feb 13 19:52:07.739778 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 19:52:07.878584 sshd[6160]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:07.882874 systemd[1]: sshd@30-10.0.0.13:22-10.0.0.1:46526.service: Deactivated successfully.
Feb 13 19:52:07.885109 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 19:52:07.885972 systemd-logind[1442]: Session 31 logged out. Waiting for processes to exit.
Feb 13 19:52:07.887002 systemd-logind[1442]: Removed session 31.