Feb 13 20:04:42.885339 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:04:42.886548 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:04:42.886561 kernel: BIOS-provided physical RAM map: Feb 13 20:04:42.886567 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 20:04:42.886573 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 13 20:04:42.886580 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 20:04:42.886587 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 20:04:42.886593 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 20:04:42.886599 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 13 20:04:42.886606 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 13 20:04:42.886620 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Feb 13 20:04:42.886627 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Feb 13 20:04:42.886633 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Feb 13 20:04:42.886639 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Feb 13 20:04:42.886647 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 13 20:04:42.886654 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 20:04:42.886663 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 13 20:04:42.886670 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 13 20:04:42.886677 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 20:04:42.886683 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 20:04:42.886690 kernel: NX (Execute Disable) protection: active Feb 13 20:04:42.886697 kernel: APIC: Static calls initialized Feb 13 20:04:42.886704 kernel: efi: EFI v2.7 by EDK II Feb 13 20:04:42.886710 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Feb 13 20:04:42.886717 kernel: SMBIOS 2.8 present. 
Feb 13 20:04:42.886724 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Feb 13 20:04:42.886731 kernel: Hypervisor detected: KVM Feb 13 20:04:42.886740 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:04:42.886747 kernel: kvm-clock: using sched offset of 3913047363 cycles Feb 13 20:04:42.886754 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:04:42.886761 kernel: tsc: Detected 2794.748 MHz processor Feb 13 20:04:42.886768 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:04:42.886776 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:04:42.886796 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Feb 13 20:04:42.886804 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 20:04:42.886811 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:04:42.886820 kernel: Using GB pages for direct mapping Feb 13 20:04:42.886827 kernel: Secure boot disabled Feb 13 20:04:42.886834 kernel: ACPI: Early table checksum verification disabled Feb 13 20:04:42.886841 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 13 20:04:42.886851 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Feb 13 20:04:42.886859 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:04:42.886866 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:04:42.886876 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 13 20:04:42.886883 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:04:42.886890 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:04:42.886897 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:04:42.886905 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:04:42.886912 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 20:04:42.886919 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Feb 13 20:04:42.886929 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Feb 13 20:04:42.886936 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 13 20:04:42.886943 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Feb 13 20:04:42.886950 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Feb 13 20:04:42.886957 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Feb 13 20:04:42.886964 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Feb 13 20:04:42.886972 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Feb 13 20:04:42.886979 kernel: No NUMA configuration found Feb 13 20:04:42.886986 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Feb 13 20:04:42.886996 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Feb 13 20:04:42.887004 kernel: Zone ranges: Feb 13 20:04:42.887011 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:04:42.887018 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Feb 13 20:04:42.887025 kernel: Normal empty Feb 13 20:04:42.887033 kernel: Movable zone start for each node Feb 13 20:04:42.887040 kernel: Early memory node ranges Feb 13 20:04:42.887047 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 20:04:42.887054 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 13 20:04:42.887061 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 13 20:04:42.887071 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Feb 13 20:04:42.887078 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Feb 13 20:04:42.887085 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Feb 13 20:04:42.887092 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Feb 13 20:04:42.887100 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:04:42.887107 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 20:04:42.887114 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 13 20:04:42.887121 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:04:42.887128 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Feb 13 20:04:42.887138 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Feb 13 20:04:42.887145 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Feb 13 20:04:42.887152 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 20:04:42.887160 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:04:42.887167 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:04:42.887174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 20:04:42.887181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:04:42.887189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:04:42.887196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:04:42.887203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:04:42.887212 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:04:42.887219 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 20:04:42.887226 kernel: TSC deadline timer available Feb 13 20:04:42.887234 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 20:04:42.887241 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 20:04:42.887248 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 20:04:42.887255 kernel: kvm-guest: setup PV sched yield Feb 13 20:04:42.887262 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 20:04:42.887269 kernel: Booting paravirtualized kernel on KVM Feb 13 20:04:42.887279 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:04:42.887287 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 20:04:42.887294 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 20:04:42.887301 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 20:04:42.887308 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 20:04:42.887315 kernel: kvm-guest: PV spinlocks enabled Feb 13 20:04:42.887322 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 20:04:42.887331 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 
20:04:42.887341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:04:42.887348 kernel: random: crng init done Feb 13 20:04:42.887355 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:04:42.887363 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:04:42.887370 kernel: Fallback order for Node 0: 0 Feb 13 20:04:42.887378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Feb 13 20:04:42.887385 kernel: Policy zone: DMA32 Feb 13 20:04:42.887392 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:04:42.887400 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 171124K reserved, 0K cma-reserved) Feb 13 20:04:42.887410 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 20:04:42.887417 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:04:42.887424 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:04:42.887432 kernel: Dynamic Preempt: voluntary Feb 13 20:04:42.887446 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:04:42.887457 kernel: rcu: RCU event tracing is enabled. Feb 13 20:04:42.887464 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 20:04:42.887472 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:04:42.887480 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:04:42.887488 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:04:42.887495 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:04:42.887503 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 20:04:42.887513 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 20:04:42.887520 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:04:42.887528 kernel: Console: colour dummy device 80x25 Feb 13 20:04:42.887536 kernel: printk: console [ttyS0] enabled Feb 13 20:04:42.887543 kernel: ACPI: Core revision 20230628 Feb 13 20:04:42.887554 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 20:04:42.887561 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:04:42.887569 kernel: x2apic enabled Feb 13 20:04:42.887577 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:04:42.887584 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 20:04:42.887592 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 20:04:42.887600 kernel: kvm-guest: setup PV IPIs Feb 13 20:04:42.887612 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 20:04:42.887620 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 20:04:42.887630 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 20:04:42.887638 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 20:04:42.887645 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 20:04:42.887653 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 20:04:42.887661 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:04:42.887668 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:04:42.887676 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:04:42.887684 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:04:42.887692 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 20:04:42.887702 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 20:04:42.887709 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:04:42.887717 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:04:42.887725 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 20:04:42.887733 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 20:04:42.887741 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 20:04:42.887749 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:04:42.887757 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:04:42.887766 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:04:42.887774 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:04:42.887782 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 20:04:42.887802 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:04:42.887818 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:04:42.887840 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:04:42.887847 kernel: landlock: Up and running. Feb 13 20:04:42.887855 kernel: SELinux: Initializing. Feb 13 20:04:42.887863 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:04:42.887873 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:04:42.887881 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 20:04:42.887889 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:04:42.887897 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:04:42.887905 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:04:42.887912 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 20:04:42.887920 kernel: ... version: 0 Feb 13 20:04:42.887927 kernel: ... bit width: 48 Feb 13 20:04:42.887935 kernel: ... generic registers: 6 Feb 13 20:04:42.887945 kernel: ... value mask: 0000ffffffffffff Feb 13 20:04:42.887952 kernel: ... max period: 00007fffffffffff Feb 13 20:04:42.887960 kernel: ... fixed-purpose events: 0 Feb 13 20:04:42.887968 kernel: ... 
event mask: 000000000000003f Feb 13 20:04:42.887975 kernel: signal: max sigframe size: 1776 Feb 13 20:04:42.887983 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:04:42.887990 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:04:42.887998 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:04:42.888006 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:04:42.888016 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 20:04:42.888023 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 20:04:42.888031 kernel: smpboot: Max logical packages: 1 Feb 13 20:04:42.888038 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 20:04:42.888046 kernel: devtmpfs: initialized Feb 13 20:04:42.888054 kernel: x86/mm: Memory block size: 128MB Feb 13 20:04:42.888061 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 13 20:04:42.888069 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 13 20:04:42.888076 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Feb 13 20:04:42.888087 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 13 20:04:42.888094 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 13 20:04:42.888102 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:04:42.888110 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 20:04:42.888118 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:04:42.888125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:04:42.888133 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:04:42.888141 kernel: audit: type=2000 audit(1739477082.170:1): state=initialized audit_enabled=0 res=1 Feb 13 20:04:42.888148 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:04:42.888158 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:04:42.888166 kernel: cpuidle: using governor menu Feb 13 20:04:42.888173 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:04:42.888181 kernel: dca service started, version 1.12.1 Feb 13 20:04:42.888189 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 20:04:42.888196 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 20:04:42.888204 kernel: PCI: Using configuration type 1 for base access Feb 13 20:04:42.888212 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:04:42.888219 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:04:42.888229 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:04:42.888237 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:04:42.888245 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:04:42.888252 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:04:42.888260 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:04:42.888267 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:04:42.888275 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:04:42.888283 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:04:42.888290 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:04:42.888300 kernel: ACPI: Interpreter enabled Feb 13 20:04:42.888308 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 20:04:42.888315 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:04:42.888323 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:04:42.888330 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:04:42.888338 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 20:04:42.888346 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:04:42.888520 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:04:42.888658 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 20:04:42.888778 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 20:04:42.888801 kernel: PCI host bridge to bus 0000:00 Feb 13 20:04:42.888930 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:04:42.889043 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 20:04:42.889154 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:04:42.889263 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 20:04:42.889379 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 20:04:42.889489 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Feb 13 20:04:42.889599 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:04:42.889748 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 20:04:42.889897 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 20:04:42.890019 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 13 20:04:42.890144 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Feb 13 20:04:42.890262 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 13 20:04:42.890381 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Feb 13 20:04:42.890501 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:04:42.890649 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 20:04:42.890772 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Feb 13 20:04:42.890912 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Feb 13 20:04:42.891037 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Feb 13 20:04:42.891168 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 20:04:42.891290 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Feb 13 
20:04:42.891409 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 13 20:04:42.891529 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Feb 13 20:04:42.891666 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:04:42.891799 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Feb 13 20:04:42.891928 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 13 20:04:42.893122 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Feb 13 20:04:42.893264 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 13 20:04:42.893395 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 20:04:42.893515 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 20:04:42.893666 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 20:04:42.893814 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Feb 13 20:04:42.893934 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Feb 13 20:04:42.894062 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 20:04:42.894181 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Feb 13 20:04:42.894191 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:04:42.894199 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:04:42.894207 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:04:42.894214 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:04:42.894226 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 20:04:42.894234 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 20:04:42.894241 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 20:04:42.894249 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 20:04:42.894256 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 20:04:42.894264 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 20:04:42.894271 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 20:04:42.894279 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 20:04:42.894286 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 20:04:42.894296 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 20:04:42.894304 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 20:04:42.894312 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 20:04:42.894319 kernel: iommu: Default domain type: Translated Feb 13 20:04:42.894327 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:04:42.894334 kernel: efivars: Registered efivars operations Feb 13 20:04:42.894342 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:04:42.894350 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:04:42.894357 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 13 20:04:42.894367 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Feb 13 20:04:42.894375 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Feb 13 20:04:42.894382 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Feb 13 20:04:42.894502 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 20:04:42.894630 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 20:04:42.894752 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 
20:04:42.894763 kernel: vgaarb: loaded Feb 13 20:04:42.894771 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 20:04:42.894779 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 20:04:42.894802 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:04:42.894810 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:04:42.894817 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:04:42.894825 kernel: pnp: PnP ACPI init Feb 13 20:04:42.894963 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 20:04:42.894974 kernel: pnp: PnP ACPI: found 6 devices Feb 13 20:04:42.894982 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:04:42.894990 kernel: NET: Registered PF_INET protocol family Feb 13 20:04:42.895001 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:04:42.895009 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 20:04:42.895017 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:04:42.895024 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:04:42.895032 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 20:04:42.895040 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 20:04:42.895047 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:04:42.895055 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:04:42.895063 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:04:42.895073 kernel: NET: Registered PF_XDP protocol family Feb 13 20:04:42.895194 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 13 20:04:42.895316 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 13 20:04:42.895429 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:04:42.895540 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:04:42.895660 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:04:42.895771 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 20:04:42.895895 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 20:04:42.896011 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Feb 13 20:04:42.896021 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:04:42.896028 kernel: Initialise system trusted keyrings Feb 13 20:04:42.896036 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 20:04:42.896044 kernel: Key type asymmetric registered Feb 13 20:04:42.896051 kernel: Asymmetric key parser 'x509' registered Feb 13 20:04:42.896059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:04:42.896066 kernel: io scheduler mq-deadline registered Feb 13 20:04:42.896077 kernel: io scheduler kyber registered Feb 13 20:04:42.896084 kernel: io scheduler bfq registered Feb 13 20:04:42.896092 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:04:42.896100 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 20:04:42.896107 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 20:04:42.896115 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 20:04:42.896123 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Feb 13 20:04:42.896130 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:04:42.896138 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:04:42.896146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:04:42.896155 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:04:42.896286 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 20:04:42.896402 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 20:04:42.896412 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:04:42.896524 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T20:04:42 UTC (1739477082) Feb 13 20:04:42.896645 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 20:04:42.896656 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 20:04:42.896668 kernel: efifb: probing for efifb Feb 13 20:04:42.896675 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Feb 13 20:04:42.896683 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Feb 13 20:04:42.896691 kernel: efifb: scrolling: redraw Feb 13 20:04:42.896699 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Feb 13 20:04:42.896707 kernel: Console: switching to colour frame buffer device 100x37 Feb 13 20:04:42.896732 kernel: fb0: EFI VGA frame buffer device Feb 13 20:04:42.896742 kernel: pstore: Using crash dump compression: deflate Feb 13 20:04:42.896750 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 20:04:42.896761 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:04:42.896769 kernel: Segment Routing with IPv6 Feb 13 20:04:42.896776 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:04:42.896796 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:04:42.896804 kernel: Key type dns_resolver registered Feb 13 20:04:42.896811 kernel: IPI shorthand broadcast: enabled Feb 13 20:04:42.896820 kernel: sched_clock: Marking stable (548002804, 112846039)->(707632383, -46783540) Feb 13 20:04:42.896828 kernel: registered taskstats version 1 Feb 13 20:04:42.896836 kernel: Loading compiled-in X.509 certificates Feb 13 20:04:42.896844 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:04:42.896855 kernel: Key type .fscrypt registered Feb 13 20:04:42.896862 kernel: Key type fscrypt-provisioning registered Feb 13 20:04:42.896870 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 20:04:42.896878 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:04:42.896886 kernel: ima: No architecture policies found Feb 13 20:04:42.896894 kernel: clk: Disabling unused clocks Feb 13 20:04:42.896902 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:04:42.896910 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:04:42.896921 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:04:42.896929 kernel: Run /init as init process Feb 13 20:04:42.896937 kernel: with arguments: Feb 13 20:04:42.896945 kernel: /init Feb 13 20:04:42.896952 kernel: with environment: Feb 13 20:04:42.896962 kernel: HOME=/ Feb 13 20:04:42.896970 kernel: TERM=linux Feb 13 20:04:42.896978 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:04:42.896988 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:04:42.897000 systemd[1]: Detected virtualization kvm. Feb 13 20:04:42.897009 systemd[1]: Detected architecture x86-64. Feb 13 20:04:42.897017 systemd[1]: Running in initrd. Feb 13 20:04:42.897028 systemd[1]: No hostname configured, using default hostname. Feb 13 20:04:42.897038 systemd[1]: Hostname set to . Feb 13 20:04:42.897047 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:04:42.897055 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:04:42.897063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:04:42.897071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:04:42.897080 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:04:42.897089 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:04:42.897097 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:04:42.897108 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:04:42.897118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:04:42.897127 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:04:42.897135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:04:42.897144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:04:42.897152 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:04:42.897160 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:04:42.897170 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:04:42.897178 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:04:42.897187 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:04:42.897195 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:04:42.897203 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:04:42.897212 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 20:04:42.897220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:04:42.897228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:04:42.897239 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:04:42.897247 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:04:42.897256 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:04:42.897264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:04:42.897272 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:04:42.897280 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:04:42.897289 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:04:42.897297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:04:42.897305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:04:42.897316 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:04:42.897324 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:04:42.897332 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:04:42.897342 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:04:42.897352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:04:42.897378 systemd-journald[192]: Collecting audit messages is disabled. Feb 13 20:04:42.897397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:04:42.897406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:04:42.897417 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:04:42.897425 systemd-journald[192]: Journal started Feb 13 20:04:42.897443 systemd-journald[192]: Runtime Journal (/run/log/journal/627aa4180fc5446f959b6746f634b59b) is 6.0M, max 48.3M, 42.2M free. Feb 13 20:04:42.877485 systemd-modules-load[193]: Inserted module 'overlay' Feb 13 20:04:42.901855 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:04:42.899840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:04:42.904681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:04:42.909841 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:04:42.912065 systemd-modules-load[193]: Inserted module 'br_netfilter' Feb 13 20:04:42.913008 kernel: Bridge firewalling registered Feb 13 20:04:42.913183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:04:42.920924 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:04:42.921223 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:04:42.924383 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:04:42.928000 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:04:42.937858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 20:04:42.941311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:04:42.944903 dracut-cmdline[225]: dracut-dracut-053 Feb 13 20:04:42.947899 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:04:42.989733 systemd-resolved[233]: Positive Trust Anchors: Feb 13 20:04:42.989747 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:04:42.989778 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:04:42.992276 systemd-resolved[233]: Defaulting to hostname 'linux'. Feb 13 20:04:42.993313 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:04:42.999302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:04:43.043820 kernel: SCSI subsystem initialized Feb 13 20:04:43.052809 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:04:43.062814 kernel: iscsi: registered transport (tcp) Feb 13 20:04:43.083813 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:04:43.083828 kernel: QLogic iSCSI HBA Driver Feb 13 20:04:43.133377 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:04:43.140920 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:04:43.164863 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:04:43.164896 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:04:43.165899 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:04:43.206810 kernel: raid6: avx2x4 gen() 30221 MB/s Feb 13 20:04:43.223806 kernel: raid6: avx2x2 gen() 30834 MB/s Feb 13 20:04:43.240871 kernel: raid6: avx2x1 gen() 25842 MB/s Feb 13 20:04:43.240885 kernel: raid6: using algorithm avx2x2 gen() 30834 MB/s Feb 13 20:04:43.258878 kernel: raid6: .... xor() 19858 MB/s, rmw enabled Feb 13 20:04:43.258893 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:04:43.278813 kernel: xor: automatically using best checksumming function avx Feb 13 20:04:43.429815 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:04:43.443006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:04:43.452910 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:04:43.465518 systemd-udevd[413]: Using default interface naming scheme 'v255'. Feb 13 20:04:43.469831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 20:04:43.482959 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:04:43.496306 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Feb 13 20:04:43.530387 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:04:43.542944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:04:43.608320 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:04:43.617936 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:04:43.631853 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:04:43.634872 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:04:43.637508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:04:43.639996 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:04:43.643833 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 20:04:43.661232 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 20:04:43.661378 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:04:43.661390 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:04:43.661400 kernel: GPT:9289727 != 19775487 Feb 13 20:04:43.661410 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:04:43.661420 kernel: GPT:9289727 != 19775487 Feb 13 20:04:43.661431 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:04:43.661441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:04:43.654221 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:04:43.668802 kernel: libata version 3.00 loaded. Feb 13 20:04:43.672943 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:04:43.680817 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 20:04:43.680846 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 20:04:43.701608 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 20:04:43.701633 kernel: AES CTR mode by8 optimization enabled Feb 13 20:04:43.701644 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 20:04:43.701809 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 20:04:43.701956 kernel: scsi host0: ahci Feb 13 20:04:43.702112 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (463) Feb 13 20:04:43.702124 kernel: scsi host1: ahci Feb 13 20:04:43.702279 kernel: scsi host2: ahci Feb 13 20:04:43.702428 kernel: scsi host3: ahci Feb 13 20:04:43.702572 kernel: scsi host4: ahci Feb 13 20:04:43.702729 kernel: scsi host5: ahci Feb 13 20:04:43.703075 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 20:04:43.703088 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 20:04:43.703098 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 20:04:43.703108 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 20:04:43.703123 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464) Feb 13 20:04:43.703134 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 20:04:43.703145 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 20:04:43.683699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:04:43.683814 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:04:43.690931 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:04:43.693373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:04:43.693490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:04:43.702906 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:04:43.709614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:04:43.727165 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 20:04:43.730493 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 20:04:43.731006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:04:43.746386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:04:43.752411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 20:04:43.757968 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 20:04:43.771963 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:04:43.774279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:04:43.774339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:04:43.776806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:04:43.780387 disk-uuid[555]: Primary Header is updated. Feb 13 20:04:43.780387 disk-uuid[555]: Secondary Entries is updated. 
Feb 13 20:04:43.780387 disk-uuid[555]: Secondary Header is updated. Feb 13 20:04:43.783055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:04:43.786063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:04:43.789811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:04:43.803894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:04:43.815036 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:04:43.835691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:04:44.014697 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 20:04:44.014752 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 20:04:44.014763 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 20:04:44.014775 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 20:04:44.015819 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 20:04:44.016813 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 20:04:44.017815 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 20:04:44.017832 kernel: ata3.00: applying bridge limits Feb 13 20:04:44.018807 kernel: ata3.00: configured for UDMA/100 Feb 13 20:04:44.019817 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 20:04:44.074816 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 20:04:44.089414 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:04:44.089430 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 20:04:44.790812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:04:44.791106 disk-uuid[556]: The operation has completed successfully. Feb 13 20:04:44.819555 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:04:44.819686 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:04:44.843928 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:04:44.849246 sh[597]: Success Feb 13 20:04:44.860806 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 20:04:44.892469 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:04:44.913167 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:04:44.915962 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:04:44.926302 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:04:44.926325 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:04:44.926337 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:04:44.927303 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:04:44.928014 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:04:44.932282 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:04:44.932974 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:04:44.933821 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:04:44.935433 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 20:04:44.949435 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:04:44.949461 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:04:44.949472 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:04:44.952831 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:04:44.961142 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:04:44.962837 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:04:44.971681 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:04:44.979005 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:04:45.026152 ignition[695]: Ignition 2.19.0 Feb 13 20:04:45.026163 ignition[695]: Stage: fetch-offline Feb 13 20:04:45.026208 ignition[695]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:04:45.026219 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:04:45.026324 ignition[695]: parsed url from cmdline: "" Feb 13 20:04:45.026328 ignition[695]: no config URL provided Feb 13 20:04:45.026334 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:04:45.026344 ignition[695]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:04:45.026370 ignition[695]: op(1): [started] loading QEMU firmware config module Feb 13 20:04:45.026375 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 20:04:45.033926 ignition[695]: op(1): [finished] loading QEMU firmware config module Feb 13 20:04:45.052674 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:04:45.062918 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:04:45.073814 ignition[695]: parsing config with SHA512: 79e54c5fdb1b9434f70e3d89abf909a74a6d4eecb7a69ed1f67a8458403d21633455cd597ba334981c4c5754e0dee440d8787179719086aab2eb22c7efdd1188 Feb 13 20:04:45.078504 unknown[695]: fetched base config from "system" Feb 13 20:04:45.078632 unknown[695]: fetched user config from "qemu" Feb 13 20:04:45.079902 ignition[695]: fetch-offline: fetch-offline passed Feb 13 20:04:45.079996 ignition[695]: Ignition finished successfully Feb 13 20:04:45.084462 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:04:45.085395 systemd-networkd[786]: lo: Link UP Feb 13 20:04:45.085406 systemd-networkd[786]: lo: Gained carrier Feb 13 20:04:45.086950 systemd-networkd[786]: Enumeration completed Feb 13 20:04:45.087018 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:04:45.087324 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:04:45.087329 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:04:45.087602 systemd[1]: Reached target network.target - Network. Feb 13 20:04:45.089119 systemd-networkd[786]: eth0: Link UP Feb 13 20:04:45.089123 systemd-networkd[786]: eth0: Gained carrier Feb 13 20:04:45.089130 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:04:45.090509 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Feb 13 20:04:45.097913 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:04:45.102855 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.159/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:04:45.111421 ignition[789]: Ignition 2.19.0 Feb 13 20:04:45.111432 ignition[789]: Stage: kargs Feb 13 20:04:45.111602 ignition[789]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:04:45.111613 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:04:45.112397 ignition[789]: kargs: kargs passed Feb 13 20:04:45.115456 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:04:45.112437 ignition[789]: Ignition finished successfully Feb 13 20:04:45.126909 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:04:45.138839 ignition[798]: Ignition 2.19.0 Feb 13 20:04:45.138851 ignition[798]: Stage: disks Feb 13 20:04:45.139024 ignition[798]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:04:45.139035 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:04:45.139890 ignition[798]: disks: disks passed Feb 13 20:04:45.141941 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:04:45.139943 ignition[798]: Ignition finished successfully Feb 13 20:04:45.143374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:04:45.144965 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:04:45.147109 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:04:45.148116 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:04:45.149881 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:04:45.158915 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:04:45.169992 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:04:45.176126 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:04:45.181936 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:04:45.264805 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:04:45.265693 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:04:45.267108 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:04:45.277867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:04:45.279415 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:04:45.280736 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:04:45.280771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:04:45.291495 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) Feb 13 20:04:45.291511 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:04:45.291522 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:04:45.291532 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:04:45.280807 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Feb 13 20:04:45.294680 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:04:45.287306 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:04:45.292371 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:04:45.296433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:04:45.327408 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:04:45.332155 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:04:45.336010 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:04:45.339741 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:04:45.422539 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:04:45.435880 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:04:45.439033 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:04:45.443806 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:04:45.462598 ignition[930]: INFO : Ignition 2.19.0
Feb 13 20:04:45.462598 ignition[930]: INFO : Stage: mount
Feb 13 20:04:45.464235 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:04:45.464235 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:04:45.464235 ignition[930]: INFO : mount: mount passed
Feb 13 20:04:45.464235 ignition[930]: INFO : Ignition finished successfully
Feb 13 20:04:45.463689 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:04:45.467003 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:04:45.477886 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:04:45.925867 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:04:45.942918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:04:45.948812 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (946)
Feb 13 20:04:45.951390 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:04:45.951410 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:04:45.951421 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:04:45.953809 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:04:45.955292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:04:45.986853 ignition[963]: INFO : Ignition 2.19.0
Feb 13 20:04:45.986853 ignition[963]: INFO : Stage: files
Feb 13 20:04:45.988451 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:04:45.988451 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:04:45.991173 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:04:45.992404 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:04:45.992404 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:04:45.995975 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:04:45.997473 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:04:45.997473 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:04:45.996697 unknown[963]: wrote ssh authorized keys file for user: core
Feb 13 20:04:46.001434 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:04:46.001434 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:04:46.038140 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:04:46.163874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:04:46.165993 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:04:46.390902 systemd-networkd[786]: eth0: Gained IPv6LL
Feb 13 20:04:46.514393 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:04:46.879412 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:04:46.879412 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:04:46.883103 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:04:46.885273 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:04:46.885273 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:04:46.885273 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:04:46.889576 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:04:46.891488 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:04:46.891488 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:04:46.894580 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:04:46.915127 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:04:46.920793 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:04:46.922418 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:04:46.922418 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:04:46.925260 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:04:46.926694 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:04:46.928435 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:04:46.930097 ignition[963]: INFO : files: files passed
Feb 13 20:04:46.930851 ignition[963]: INFO : Ignition finished successfully
Feb 13 20:04:46.934318 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:04:46.945915 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:04:46.947642 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:04:46.949611 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:04:46.949735 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:04:46.958075 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:04:46.960821 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:04:46.960821 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:04:46.964089 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:04:46.966936 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:04:46.967204 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:04:46.980924 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:04:47.006398 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:04:47.007445 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:04:47.010062 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:04:47.012131 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:04:47.014199 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:04:47.026917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:04:47.042271 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:04:47.060909 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:04:47.071871 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:04:47.074218 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:04:47.076573 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:04:47.078411 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:04:47.079409 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:04:47.081948 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:04:47.084029 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:04:47.085864 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:04:47.088085 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:04:47.090383 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:04:47.092650 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:04:47.094721 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:04:47.097197 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:04:47.099302 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:04:47.101351 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:04:47.102985 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:04:47.103990 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:04:47.106257 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:04:47.108435 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:04:47.110806 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:04:47.111861 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:04:47.114428 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:04:47.115444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:04:47.117684 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:04:47.118768 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:04:47.121130 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:04:47.122899 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:04:47.123982 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:04:47.126679 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:04:47.128593 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:04:47.130467 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:04:47.131340 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:04:47.133305 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:04:47.134203 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:04:47.136280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:04:47.137457 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:04:47.139981 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:04:47.140972 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:04:47.153938 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:04:47.156462 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:04:47.158426 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:04:47.159556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:04:47.161924 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:04:47.163005 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:04:47.166738 ignition[1017]: INFO : Ignition 2.19.0
Feb 13 20:04:47.166738 ignition[1017]: INFO : Stage: umount
Feb 13 20:04:47.166738 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:04:47.166738 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:04:47.173825 ignition[1017]: INFO : umount: umount passed
Feb 13 20:04:47.173825 ignition[1017]: INFO : Ignition finished successfully
Feb 13 20:04:47.169472 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:04:47.169605 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:04:47.171941 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:04:47.172055 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:04:47.174416 systemd[1]: Stopped target network.target - Network.
Feb 13 20:04:47.175532 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:04:47.175592 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:04:47.177443 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:04:47.177489 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:04:47.179422 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:04:47.179466 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:04:47.181689 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:04:47.181735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:04:47.184093 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:04:47.186011 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:04:47.188845 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:04:47.192108 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:04:47.192236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:04:47.194345 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:04:47.194406 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:04:47.194867 systemd-networkd[786]: eth0: DHCPv6 lease lost
Feb 13 20:04:47.196936 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:04:47.197062 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:04:47.200198 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:04:47.200272 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:04:47.213871 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:04:47.213955 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:04:47.214012 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:04:47.214351 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:04:47.214395 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:04:47.214674 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:04:47.214715 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:04:47.215090 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:04:47.223228 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:04:47.223355 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:04:47.233549 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:04:47.233733 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:04:47.235997 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:04:47.236046 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:04:47.238085 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:04:47.238123 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:04:47.240099 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:04:47.240145 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:04:47.242225 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:04:47.242273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:04:47.244291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:04:47.244340 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:04:47.256947 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:04:47.258064 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:04:47.258121 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:04:47.260702 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:04:47.260751 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:04:47.263007 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:04:47.263058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:04:47.265474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:04:47.265528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:04:47.268043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:04:47.268151 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:04:47.373409 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:04:47.373569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:04:47.376022 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:04:47.377380 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:04:47.377442 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:04:47.387993 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:04:47.395447 systemd[1]: Switching root.
Feb 13 20:04:47.428513 systemd-journald[192]: Journal stopped
Feb 13 20:04:48.595439 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:04:48.595535 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:04:48.595549 kernel: SELinux: policy capability open_perms=1
Feb 13 20:04:48.595560 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:04:48.595571 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:04:48.595582 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:04:48.595593 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:04:48.595604 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:04:48.595624 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:04:48.595635 kernel: audit: type=1403 audit(1739477087.873:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:04:48.595647 systemd[1]: Successfully loaded SELinux policy in 40.441ms.
Feb 13 20:04:48.595673 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.224ms.
Feb 13 20:04:48.595686 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:04:48.595698 systemd[1]: Detected virtualization kvm.
Feb 13 20:04:48.595710 systemd[1]: Detected architecture x86-64.
Feb 13 20:04:48.595721 systemd[1]: Detected first boot.
Feb 13 20:04:48.595735 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:04:48.595753 zram_generator::config[1061]: No configuration found.
Feb 13 20:04:48.595767 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:04:48.595779 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:04:48.595802 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:04:48.595815 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:04:48.595828 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:04:48.595840 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:04:48.595851 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:04:48.595866 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:04:48.595878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:04:48.595890 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:04:48.595902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:04:48.595913 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:04:48.595929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:04:48.595941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:04:48.595954 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:04:48.595968 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:04:48.595980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:04:48.595996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:04:48.596011 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:04:48.596032 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:04:48.596057 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:04:48.596080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:04:48.596092 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:04:48.596106 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:04:48.596119 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:04:48.596130 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:04:48.596142 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:04:48.596154 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:04:48.596165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:04:48.596177 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:04:48.596189 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:04:48.596201 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:04:48.596215 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:04:48.596226 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:04:48.596238 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:04:48.596250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:04:48.596262 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:04:48.596274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:48.596286 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:04:48.596297 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:04:48.596309 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:04:48.596324 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:04:48.596336 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:04:48.596347 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:04:48.596359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:04:48.596373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:04:48.596385 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:04:48.596396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:04:48.596408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:04:48.596422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:04:48.596434 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:04:48.596446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:04:48.596465 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:04:48.596477 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:04:48.596489 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:04:48.596501 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:04:48.596513 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:04:48.596525 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:04:48.596539 kernel: fuse: init (API version 7.39)
Feb 13 20:04:48.596550 kernel: loop: module loaded
Feb 13 20:04:48.596562 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:04:48.596574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:04:48.596588 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:04:48.596601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:04:48.596614 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:04:48.596626 systemd[1]: Stopped verity-setup.service.
Feb 13 20:04:48.596640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:48.596655 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:04:48.596667 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:04:48.596678 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:04:48.596707 systemd-journald[1135]: Collecting audit messages is disabled.
Feb 13 20:04:48.596731 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:04:48.596743 systemd-journald[1135]: Journal started
Feb 13 20:04:48.596765 systemd-journald[1135]: Runtime Journal (/run/log/journal/627aa4180fc5446f959b6746f634b59b) is 6.0M, max 48.3M, 42.2M free.
Feb 13 20:04:48.379382 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:04:48.399073 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 20:04:48.399524 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:04:48.599208 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:04:48.600051 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:04:48.600863 kernel: ACPI: bus type drm_connector registered
Feb 13 20:04:48.601670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:04:48.603183 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:04:48.604676 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:04:48.606240 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:04:48.606416 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:04:48.607942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:04:48.608118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:04:48.609575 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:04:48.609755 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:04:48.611353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:04:48.611530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:04:48.613082 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:04:48.613258 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:04:48.614908 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:04:48.615080 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:04:48.616587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:04:48.618051 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:04:48.619609 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:04:48.635121 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:04:48.645948 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:04:48.648407 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:04:48.649543 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:04:48.649635 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:04:48.651667 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:04:48.653960 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:04:48.659826 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:04:48.660988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:04:48.663096 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:04:48.667229 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:04:48.668629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:04:48.669854 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:04:48.671148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:04:48.672684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:04:48.679751 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:04:48.685879 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:04:48.688935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:04:48.690494 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:04:48.691905 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:04:48.697867 kernel: loop0: detected capacity change from 0 to 205544
Feb 13 20:04:48.698009 systemd-journald[1135]: Time spent on flushing to /var/log/journal/627aa4180fc5446f959b6746f634b59b is 19.168ms for 1001 entries.
Feb 13 20:04:48.698009 systemd-journald[1135]: System Journal (/var/log/journal/627aa4180fc5446f959b6746f634b59b) is 8.0M, max 195.6M, 187.6M free.
Feb 13 20:04:48.739964 systemd-journald[1135]: Received client request to flush runtime journal.
Feb 13 20:04:48.740018 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:04:48.698238 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:04:48.712058 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:04:48.713611 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:04:48.716159 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:04:48.719894 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:04:48.723068 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:04:48.730231 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 20:04:48.730244 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 20:04:48.734608 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 20:04:48.736450 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:04:48.746057 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:04:48.747657 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:04:48.755356 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:04:48.756034 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:04:48.766817 kernel: loop1: detected capacity change from 0 to 140768
Feb 13 20:04:48.776912 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:04:48.785108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:04:48.803811 kernel: loop2: detected capacity change from 0 to 142488
Feb 13 20:04:48.801531 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Feb 13 20:04:48.801554 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Feb 13 20:04:48.806928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:04:48.840824 kernel: loop3: detected capacity change from 0 to 205544
Feb 13 20:04:48.847822 kernel: loop4: detected capacity change from 0 to 140768
Feb 13 20:04:48.857986 kernel: loop5: detected capacity change from 0 to 142488
Feb 13 20:04:48.868603 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 20:04:48.869203 (sd-merge)[1205]: Merged extensions into '/usr'.
Feb 13 20:04:48.873075 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:04:48.873091 systemd[1]: Reloading...
Feb 13 20:04:48.928851 zram_generator::config[1230]: No configuration found.
Feb 13 20:04:48.991730 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:04:49.048108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:04:49.097288 systemd[1]: Reloading finished in 223 ms.
Feb 13 20:04:49.134238 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:04:49.135930 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:04:49.161010 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:04:49.163156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:04:49.171122 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:04:49.171138 systemd[1]: Reloading...
Feb 13 20:04:49.189393 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:04:49.189952 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:04:49.191216 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:04:49.191624 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Feb 13 20:04:49.191728 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Feb 13 20:04:49.223611 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:04:49.223629 systemd-tmpfiles[1269]: Skipping /boot
Feb 13 20:04:49.224807 zram_generator::config[1299]: No configuration found.
Feb 13 20:04:49.235189 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:04:49.235209 systemd-tmpfiles[1269]: Skipping /boot
Feb 13 20:04:49.328718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:04:49.377729 systemd[1]: Reloading finished in 206 ms.
Feb 13 20:04:49.395020 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:04:49.396657 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:04:49.418220 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:04:49.420666 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:04:49.422976 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:04:49.427993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:04:49.431393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:04:49.434287 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:04:49.438674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:49.439016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:04:49.440406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:04:49.445777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:04:49.448429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:04:49.450939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:04:49.456236 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:04:49.457505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:49.458718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:04:49.459206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:04:49.461521 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:04:49.464245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:04:49.464525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:04:49.466572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:04:49.466869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:04:49.471011 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Feb 13 20:04:49.476390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:49.478075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:04:49.478488 augenrules[1364]: No rules
Feb 13 20:04:49.483003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:04:49.486901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:04:49.489902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:04:49.491372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:04:49.493366 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:04:49.495579 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:49.496739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:04:49.499320 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:04:49.501304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:04:49.501478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:04:49.503616 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:04:49.505507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:04:49.506065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:04:49.509535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:04:49.509711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:04:49.511408 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 20:04:49.513038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:04:49.532737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:49.533046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:04:49.540976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:04:49.548000 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:04:49.554933 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:04:49.560927 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:04:49.566810 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1382)
Feb 13 20:04:49.562067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:04:49.575942 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:04:49.577264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:04:49.578585 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:04:49.579867 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 20:04:49.582214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:04:49.582383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:04:49.584204 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:04:49.585051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:04:49.587152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:04:49.587313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:04:49.588866 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:04:49.589031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:04:49.590220 systemd-resolved[1339]: Positive Trust Anchors:
Feb 13 20:04:49.590240 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:04:49.590272 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:04:49.604586 systemd-resolved[1339]: Defaulting to hostname 'linux'.
Feb 13 20:04:49.606597 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:04:49.610273 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 20:04:49.622358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:04:49.624180 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:04:49.634266 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:04:49.635510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:04:49.637426 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 20:04:49.635743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:04:49.640063 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 20:04:49.641274 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:04:49.644812 kernel: ACPI: button: Power Button [PWRF]
Feb 13 20:04:49.650912 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 20:04:49.653668 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 20:04:49.663410 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 20:04:49.663589 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 20:04:49.664082 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 20:04:49.655062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:04:49.672389 systemd-networkd[1407]: lo: Link UP
Feb 13 20:04:49.672398 systemd-networkd[1407]: lo: Gained carrier
Feb 13 20:04:49.678001 systemd-networkd[1407]: Enumeration completed
Feb 13 20:04:49.678120 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:04:49.678397 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:04:49.678409 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:04:49.679620 systemd-networkd[1407]: eth0: Link UP
Feb 13 20:04:49.679633 systemd-networkd[1407]: eth0: Gained carrier
Feb 13 20:04:49.679644 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:04:49.684707 systemd[1]: Reached target network.target - Network.
Feb 13 20:04:49.686313 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 20:04:49.695833 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.159/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:04:49.695936 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:04:49.705942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:04:49.721852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:04:49.722134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:04:49.728333 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 20:04:49.728381 systemd-timesyncd[1423]: Initial clock synchronization to Thu 2025-02-13 20:04:49.971295 UTC.
Feb 13 20:04:49.775950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:04:49.777172 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 20:04:49.778700 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 20:04:49.789025 kernel: kvm_amd: TSC scaling supported
Feb 13 20:04:49.789108 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 20:04:49.789125 kernel: kvm_amd: Nested Paging enabled
Feb 13 20:04:49.789172 kernel: kvm_amd: LBR virtualization supported
Feb 13 20:04:49.790072 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 20:04:49.790095 kernel: kvm_amd: Virtual GIF supported
Feb 13 20:04:49.810829 kernel: EDAC MC: Ver: 3.0.0
Feb 13 20:04:49.833708 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:04:49.845061 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:04:49.853080 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:04:49.862630 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:04:49.892778 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:04:49.894289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:04:49.895403 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:04:49.896580 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 20:04:49.897839 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 20:04:49.899266 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 20:04:49.900444 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 20:04:49.901679 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 20:04:49.902952 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 20:04:49.902984 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:04:49.903881 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:04:49.905498 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 20:04:49.908386 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 20:04:49.915594 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 20:04:49.918125 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:04:49.919737 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 20:04:49.920933 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:04:49.921936 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:04:49.922984 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:04:49.923012 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:04:49.923965 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 20:04:49.926082 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 20:04:49.927995 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:04:49.929979 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 20:04:49.934363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 20:04:49.935671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 20:04:49.938371 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 20:04:49.940959 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 20:04:49.942275 jq[1450]: false
Feb 13 20:04:49.945956 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 20:04:49.949008 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 20:04:49.955101 extend-filesystems[1451]: Found loop3 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found loop4 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found loop5 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found sr0 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda1 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda2 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda3 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found usr Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda4 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda6 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda7 Feb 13 20:04:49.955101 extend-filesystems[1451]: Found vda9 Feb 13 20:04:49.955101 extend-filesystems[1451]: Checking size of /dev/vda9 Feb 13 20:04:49.956207 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:04:49.969760 dbus-daemon[1449]: [system] SELinux support is enabled Feb 13 20:04:49.959435 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:04:49.986157 extend-filesystems[1451]: Resized partition /dev/vda9 Feb 13 20:04:49.959849 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:04:49.963966 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:04:49.966696 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:04:49.987694 jq[1466]: true Feb 13 20:04:49.969300 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:04:49.971396 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:04:49.976350 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:04:49.976701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:04:49.977042 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:04:49.977255 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:04:49.984328 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:04:49.984563 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:04:49.990207 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:04:49.995283 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:04:49.996729 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:04:50.004414 jq[1474]: true Feb 13 20:04:50.007870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Feb 13 20:04:50.012376 update_engine[1465]: I20250213 20:04:50.012221 1465 main.cc:92] Flatcar Update Engine starting Feb 13 20:04:50.016830 update_engine[1465]: I20250213 20:04:50.016269 1465 update_check_scheduler.cc:74] Next update check in 3m20s Feb 13 20:04:50.022871 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:04:50.036009 tar[1472]: linux-amd64/helm Feb 13 20:04:50.040543 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:04:50.042465 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
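The extend-filesystems run here compares the size of /dev/vda9 against the filesystem on it and grows the ext4 filesystem in place; the surrounding kernel and resize2fs lines record an online resize from 553472 to 1864699 4k blocks with / still mounted. The manual equivalent is a single call (ext4 supports online grow, not online shrink):

    # grow the mounted ext4 filesystem to fill its device
    resize2fs /dev/vda9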
Feb 13 20:04:50.044074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:04:50.044093 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:04:50.045680 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:04:50.045695 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:04:50.052399 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:04:50.052420 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:04:50.054710 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:04:50.054710 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:04:50.054710 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:04:50.054004 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:04:50.066383 extend-filesystems[1451]: Resized filesystem in /dev/vda9 Feb 13 20:04:50.056819 systemd-logind[1462]: New seat seat0. Feb 13 20:04:50.058613 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:04:50.058886 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:04:50.066590 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:04:50.091664 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:04:50.093505 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:04:50.095715 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:04:50.096983 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:04:50.203374 containerd[1476]: time="2025-02-13T20:04:50.203281036Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:04:50.227846 containerd[1476]: time="2025-02-13T20:04:50.227729785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:04:50.229648 containerd[1476]: time="2025-02-13T20:04:50.229604429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:04:50.229648 containerd[1476]: time="2025-02-13T20:04:50.229637102Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:04:50.229733 containerd[1476]: time="2025-02-13T20:04:50.229654796Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:04:50.229851 containerd[1476]: time="2025-02-13T20:04:50.229830258Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 20:04:50.229875 containerd[1476]: time="2025-02-13T20:04:50.229851802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:04:50.229940 containerd[1476]: time="2025-02-13T20:04:50.229920689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:04:50.229975 containerd[1476]: time="2025-02-13T20:04:50.229940117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230154 containerd[1476]: time="2025-02-13T20:04:50.230122982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230154 containerd[1476]: time="2025-02-13T20:04:50.230143257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230221 containerd[1476]: time="2025-02-13T20:04:50.230156563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230221 containerd[1476]: time="2025-02-13T20:04:50.230167330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230293 containerd[1476]: time="2025-02-13T20:04:50.230274092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230538 containerd[1476]: time="2025-02-13T20:04:50.230512226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230665 containerd[1476]: time="2025-02-13T20:04:50.230644962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:04:50.230665 containerd[1476]: time="2025-02-13T20:04:50.230662139Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:04:50.230787 containerd[1476]: time="2025-02-13T20:04:50.230769861Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:04:50.230879 containerd[1476]: time="2025-02-13T20:04:50.230858331Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:04:50.236797 containerd[1476]: time="2025-02-13T20:04:50.236757230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:04:50.236855 containerd[1476]: time="2025-02-13T20:04:50.236802827Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:04:50.236855 containerd[1476]: time="2025-02-13T20:04:50.236849807Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:04:50.236898 containerd[1476]: time="2025-02-13T20:04:50.236865633Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 20:04:50.236898 containerd[1476]: time="2025-02-13T20:04:50.236880549Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:04:50.237068 containerd[1476]: time="2025-02-13T20:04:50.237046371Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:04:50.237290 containerd[1476]: time="2025-02-13T20:04:50.237269949Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:04:50.237408 containerd[1476]: time="2025-02-13T20:04:50.237379685Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:04:50.237444 containerd[1476]: time="2025-02-13T20:04:50.237408342Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:04:50.237444 containerd[1476]: time="2025-02-13T20:04:50.237422154Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:04:50.237444 containerd[1476]: time="2025-02-13T20:04:50.237434635Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237497 containerd[1476]: time="2025-02-13T20:04:50.237447590Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237497 containerd[1476]: time="2025-02-13T20:04:50.237461578Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237497 containerd[1476]: time="2025-02-13T20:04:50.237480975Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237560 containerd[1476]: time="2025-02-13T20:04:50.237509416Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237560 containerd[1476]: time="2025-02-13T20:04:50.237523115Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237560 containerd[1476]: time="2025-02-13T20:04:50.237535843Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237560 containerd[1476]: time="2025-02-13T20:04:50.237546290Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:04:50.237638 containerd[1476]: time="2025-02-13T20:04:50.237565006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237638 containerd[1476]: time="2025-02-13T20:04:50.237579489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237638 containerd[1476]: time="2025-02-13T20:04:50.237591990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237638 containerd[1476]: time="2025-02-13T20:04:50.237604203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237638 containerd[1476]: time="2025-02-13T20:04:50.237617344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 20:04:50.237638 containerd[1476]: time="2025-02-13T20:04:50.237630950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237638 containerd[1476]: time="2025-02-13T20:04:50.237642327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237655757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237670302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237684455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237697059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237709024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237721402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237736267Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237755901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237771 containerd[1476]: time="2025-02-13T20:04:50.237767381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237778592Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237837681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237852000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237862117Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237873771Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237884064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237895770Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237905143Z" level=info msg="NRI interface is disabled by configuration." 
Feb 13 20:04:50.237943 containerd[1476]: time="2025-02-13T20:04:50.237915766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:04:50.238230 containerd[1476]: time="2025-02-13T20:04:50.238176157Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:04:50.238400 containerd[1476]: time="2025-02-13T20:04:50.238230364Z" level=info msg="Connect containerd service" Feb 13 20:04:50.238400 containerd[1476]: time="2025-02-13T20:04:50.238268643Z" level=info msg="using legacy CRI server" Feb 13 20:04:50.238400 containerd[1476]: time="2025-02-13T20:04:50.238275590Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:04:50.238400 containerd[1476]: time="2025-02-13T20:04:50.238349132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:04:50.240117 containerd[1476]: time="2025-02-13T20:04:50.240085074Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:04:50.240494 containerd[1476]: time="2025-02-13T20:04:50.240402510Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:04:50.240494 containerd[1476]: time="2025-02-13T20:04:50.240455706Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:04:50.240541 containerd[1476]: time="2025-02-13T20:04:50.240511905Z" level=info msg="Start subscribing containerd event" Feb 13 20:04:50.240561 containerd[1476]: time="2025-02-13T20:04:50.240553383Z" level=info msg="Start recovering state" Feb 13 20:04:50.240629 containerd[1476]: time="2025-02-13T20:04:50.240608044Z" level=info msg="Start event monitor" Feb 13 20:04:50.240660 containerd[1476]: time="2025-02-13T20:04:50.240634946Z" level=info msg="Start snapshots syncer" Feb 13 20:04:50.240660 containerd[1476]: time="2025-02-13T20:04:50.240644733Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:04:50.240660 containerd[1476]: time="2025-02-13T20:04:50.240652537Z" level=info msg="Start streaming server" Feb 13 20:04:50.240840 containerd[1476]: time="2025-02-13T20:04:50.240699942Z" level=info msg="containerd successfully booted in 0.038413s" Feb 13 20:04:50.241938 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:04:50.265938 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:04:50.290802 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:04:50.297120 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:04:50.299046 systemd[1]: Started sshd@0-10.0.0.159:22-10.0.0.1:50520.service - OpenSSH per-connection server daemon (10.0.0.1:50520). Feb 13 20:04:50.305655 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:04:50.305903 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:04:50.309952 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:04:50.326577 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:04:50.334170 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:04:50.336773 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:04:50.338181 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:04:50.351488 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 50520 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:04:50.353760 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:50.362784 systemd-logind[1462]: New session 1 of user core. Feb 13 20:04:50.364082 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:04:50.376043 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:04:50.387897 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:04:50.397443 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:04:50.401604 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:04:50.462431 tar[1472]: linux-amd64/LICENSE Feb 13 20:04:50.462532 tar[1472]: linux-amd64/README.md Feb 13 20:04:50.476620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 20:04:50.513806 systemd[1541]: Queued start job for default target default.target. Feb 13 20:04:50.528436 systemd[1541]: Created slice app.slice - User Application Slice. Feb 13 20:04:50.528469 systemd[1541]: Reached target paths.target - Paths. Feb 13 20:04:50.528489 systemd[1541]: Reached target timers.target - Timers. Feb 13 20:04:50.530438 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:04:50.543118 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:04:50.543305 systemd[1541]: Reached target sockets.target - Sockets. Feb 13 20:04:50.543332 systemd[1541]: Reached target basic.target - Basic System. Feb 13 20:04:50.543381 systemd[1541]: Reached target default.target - Main User Target. Feb 13 20:04:50.543440 systemd[1541]: Startup finished in 134ms. Feb 13 20:04:50.543767 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:04:50.546340 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:04:50.613513 systemd[1]: Started sshd@1-10.0.0.159:22-10.0.0.1:50532.service - OpenSSH per-connection server daemon (10.0.0.1:50532). Feb 13 20:04:50.656572 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 50532 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:04:50.658298 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:50.662757 systemd-logind[1462]: New session 2 of user core. Feb 13 20:04:50.672958 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:04:50.730232 sshd[1555]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:50.737726 systemd[1]: sshd@1-10.0.0.159:22-10.0.0.1:50532.service: Deactivated successfully. Feb 13 20:04:50.739662 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:04:50.741051 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:04:50.757126 systemd[1]: Started sshd@2-10.0.0.159:22-10.0.0.1:50544.service - OpenSSH per-connection server daemon (10.0.0.1:50544). Feb 13 20:04:50.759518 systemd-logind[1462]: Removed session 2. Feb 13 20:04:50.791039 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 50544 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:04:50.792494 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:50.796275 systemd-logind[1462]: New session 3 of user core. Feb 13 20:04:50.811935 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:04:50.868064 sshd[1562]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:50.872119 systemd[1]: sshd@2-10.0.0.159:22-10.0.0.1:50544.service: Deactivated successfully. Feb 13 20:04:50.873938 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:04:50.874589 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:04:50.875380 systemd-logind[1462]: Removed session 3. Feb 13 20:04:50.935964 systemd-networkd[1407]: eth0: Gained IPv6LL Feb 13 20:04:50.939166 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:04:50.940929 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:04:50.952030 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:04:50.954630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 20:04:50.957164 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:04:50.976745 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:04:50.977153 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:04:50.978933 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:04:50.982504 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:04:51.574598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:04:51.576263 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:04:51.580040 systemd[1]: Startup finished in 677ms (kernel) + 5.173s (initrd) + 3.745s (userspace) = 9.596s. Feb 13 20:04:51.580724 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:04:51.984361 kubelet[1590]: E0213 20:04:51.984163 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:04:51.988588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:04:51.988807 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:05:01.036327 systemd[1]: Started sshd@3-10.0.0.159:22-10.0.0.1:56740.service - OpenSSH per-connection server daemon (10.0.0.1:56740). Feb 13 20:05:01.073543 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 56740 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:05:01.075024 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:01.078620 systemd-logind[1462]: New session 4 of user core. Feb 13 20:05:01.087927 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:05:01.141429 sshd[1603]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:01.159931 systemd[1]: sshd@3-10.0.0.159:22-10.0.0.1:56740.service: Deactivated successfully. Feb 13 20:05:01.162089 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:05:01.163581 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:05:01.176122 systemd[1]: Started sshd@4-10.0.0.159:22-10.0.0.1:56750.service - OpenSSH per-connection server daemon (10.0.0.1:56750). Feb 13 20:05:01.177156 systemd-logind[1462]: Removed session 4. Feb 13 20:05:01.209738 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 56750 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:05:01.211459 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:01.215370 systemd-logind[1462]: New session 5 of user core. Feb 13 20:05:01.224909 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:05:01.274660 sshd[1610]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:01.287903 systemd[1]: sshd@4-10.0.0.159:22-10.0.0.1:56750.service: Deactivated successfully. Feb 13 20:05:01.289756 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:05:01.291402 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. 
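The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so kubelet exits until the node is bootstrapped. Purely as a sketch of what such a file contains (not this node's eventual config), a minimal KubeletConfiguration might read:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock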
Feb 13 20:05:01.300188 systemd[1]: Started sshd@5-10.0.0.159:22-10.0.0.1:56758.service - OpenSSH per-connection server daemon (10.0.0.1:56758). Feb 13 20:05:01.301253 systemd-logind[1462]: Removed session 5. Feb 13 20:05:01.334196 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 56758 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:05:01.335729 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:01.339903 systemd-logind[1462]: New session 6 of user core. Feb 13 20:05:01.349904 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:05:01.403429 sshd[1617]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:01.417585 systemd[1]: sshd@5-10.0.0.159:22-10.0.0.1:56758.service: Deactivated successfully. Feb 13 20:05:01.419355 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:05:01.420955 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:05:01.430133 systemd[1]: Started sshd@6-10.0.0.159:22-10.0.0.1:56766.service - OpenSSH per-connection server daemon (10.0.0.1:56766). Feb 13 20:05:01.430937 systemd-logind[1462]: Removed session 6. Feb 13 20:05:01.463189 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 56766 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:05:01.464625 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:01.468320 systemd-logind[1462]: New session 7 of user core. Feb 13 20:05:01.477893 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:05:01.534560 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:05:01.534900 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:05:01.552521 sudo[1627]: pam_unix(sudo:session): session closed for user root Feb 13 20:05:01.554085 sshd[1624]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:01.563476 systemd[1]: sshd@6-10.0.0.159:22-10.0.0.1:56766.service: Deactivated successfully. Feb 13 20:05:01.564997 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:05:01.566553 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:05:01.574016 systemd[1]: Started sshd@7-10.0.0.159:22-10.0.0.1:56778.service - OpenSSH per-connection server daemon (10.0.0.1:56778). Feb 13 20:05:01.574781 systemd-logind[1462]: Removed session 7. Feb 13 20:05:01.608262 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 56778 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:05:01.609756 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:01.613468 systemd-logind[1462]: New session 8 of user core. Feb 13 20:05:01.622914 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 20:05:01.676228 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:05:01.676558 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:05:01.679821 sudo[1636]: pam_unix(sudo:session): session closed for user root Feb 13 20:05:01.685960 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:05:01.686308 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:05:01.712042 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:05:01.713555 auditctl[1639]: No rules Feb 13 20:05:01.714852 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:05:01.715115 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:05:01.716967 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:05:01.746522 augenrules[1657]: No rules Feb 13 20:05:01.748493 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:05:01.749817 sudo[1635]: pam_unix(sudo:session): session closed for user root Feb 13 20:05:01.751562 sshd[1632]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:01.762556 systemd[1]: sshd@7-10.0.0.159:22-10.0.0.1:56778.service: Deactivated successfully. Feb 13 20:05:01.764276 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:05:01.765981 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:05:01.767270 systemd[1]: Started sshd@8-10.0.0.159:22-10.0.0.1:56786.service - OpenSSH per-connection server daemon (10.0.0.1:56786). Feb 13 20:05:01.768085 systemd-logind[1462]: Removed session 8. Feb 13 20:05:01.804894 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 56786 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:05:01.806347 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:01.809977 systemd-logind[1462]: New session 9 of user core. Feb 13 20:05:01.819913 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:05:01.872613 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:05:01.872982 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:05:02.142971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:05:02.152022 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:05:02.152115 (dockerd)[1686]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:05:02.153369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:02.315161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
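The audit-rules sequence above behaves this way because augenrules assembles /etc/audit/audit.rules by concatenating every *.rules file under /etc/audit/rules.d/; with both rule files removed, the reload legitimately ends in "No rules". A hedged example of restoring a rule (the file name and key are hypothetical):

    # /etc/audit/rules.d/10-example.rules
    -w /etc/passwd -p wa -k identity

    augenrules --load   # merge rules.d and load the result
    auditctl -l         # confirm the active rule set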
Feb 13 20:05:02.320713 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:05:02.362293 kubelet[1700]: E0213 20:05:02.362153 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:05:02.368530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:05:02.368738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:05:02.429929 dockerd[1686]: time="2025-02-13T20:05:02.429775510Z" level=info msg="Starting up" Feb 13 20:05:02.804398 dockerd[1686]: time="2025-02-13T20:05:02.804239191Z" level=info msg="Loading containers: start." Feb 13 20:05:02.913818 kernel: Initializing XFRM netlink socket Feb 13 20:05:02.993156 systemd-networkd[1407]: docker0: Link UP Feb 13 20:05:03.018265 dockerd[1686]: time="2025-02-13T20:05:03.018224931Z" level=info msg="Loading containers: done." Feb 13 20:05:03.032166 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2334676456-merged.mount: Deactivated successfully. Feb 13 20:05:03.033922 dockerd[1686]: time="2025-02-13T20:05:03.033861966Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:05:03.034053 dockerd[1686]: time="2025-02-13T20:05:03.033974394Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:05:03.034110 dockerd[1686]: time="2025-02-13T20:05:03.034087166Z" level=info msg="Daemon has completed initialization" Feb 13 20:05:03.070420 dockerd[1686]: time="2025-02-13T20:05:03.070045561Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:05:03.070215 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:05:03.742132 containerd[1476]: time="2025-02-13T20:05:03.742081438Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 20:05:04.364160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760688132.mount: Deactivated successfully. 
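The PullImage request above arrives over CRI, so the resulting images land in containerd's k8s.io namespace rather than the default one. Assuming only standard ctr behaviour, the same images can be inspected or pre-pulled by hand with:

    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.6
    ctr --namespace k8s.io images ls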
Feb 13 20:05:05.426472 containerd[1476]: time="2025-02-13T20:05:05.426415340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:05.427559 containerd[1476]: time="2025-02-13T20:05:05.427501564Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 20:05:05.428901 containerd[1476]: time="2025-02-13T20:05:05.428871294Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:05.431615 containerd[1476]: time="2025-02-13T20:05:05.431576691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:05.432741 containerd[1476]: time="2025-02-13T20:05:05.432694200Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.690568176s" Feb 13 20:05:05.432870 containerd[1476]: time="2025-02-13T20:05:05.432743957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 20:05:05.434129 containerd[1476]: time="2025-02-13T20:05:05.434083065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 20:05:06.952433 containerd[1476]: time="2025-02-13T20:05:06.952377041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:06.953893 containerd[1476]: time="2025-02-13T20:05:06.953823373Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 20:05:06.963663 containerd[1476]: time="2025-02-13T20:05:06.963624858Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:06.968239 containerd[1476]: time="2025-02-13T20:05:06.968181026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:06.969082 containerd[1476]: time="2025-02-13T20:05:06.969042777Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.534926158s" Feb 13 20:05:06.969082 containerd[1476]: time="2025-02-13T20:05:06.969076419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 20:05:06.969682 
containerd[1476]: time="2025-02-13T20:05:06.969618510Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 20:05:08.329135 containerd[1476]: time="2025-02-13T20:05:08.329064761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:08.329810 containerd[1476]: time="2025-02-13T20:05:08.329726540Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 20:05:08.331027 containerd[1476]: time="2025-02-13T20:05:08.330989933Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:08.333796 containerd[1476]: time="2025-02-13T20:05:08.333760975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:08.334945 containerd[1476]: time="2025-02-13T20:05:08.334907669Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.365053656s" Feb 13 20:05:08.334981 containerd[1476]: time="2025-02-13T20:05:08.334944107Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 20:05:08.335485 containerd[1476]: time="2025-02-13T20:05:08.335454781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 20:05:09.253219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425696659.mount: Deactivated successfully. 
Feb 13 20:05:10.168152 containerd[1476]: time="2025-02-13T20:05:10.168078802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:10.168826 containerd[1476]: time="2025-02-13T20:05:10.168729696Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 20:05:10.169915 containerd[1476]: time="2025-02-13T20:05:10.169879731Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:10.171718 containerd[1476]: time="2025-02-13T20:05:10.171682969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:10.172328 containerd[1476]: time="2025-02-13T20:05:10.172273293Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.836780663s" Feb 13 20:05:10.172328 containerd[1476]: time="2025-02-13T20:05:10.172319566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 20:05:10.172875 containerd[1476]: time="2025-02-13T20:05:10.172845223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:05:11.377885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361002014.mount: Deactivated successfully. 
Feb 13 20:05:12.586261 containerd[1476]: time="2025-02-13T20:05:12.586189382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:12.586928 containerd[1476]: time="2025-02-13T20:05:12.586855310Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:05:12.588000 containerd[1476]: time="2025-02-13T20:05:12.587966386Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:12.590675 containerd[1476]: time="2025-02-13T20:05:12.590627385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:12.591730 containerd[1476]: time="2025-02-13T20:05:12.591692942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.418666644s" Feb 13 20:05:12.591730 containerd[1476]: time="2025-02-13T20:05:12.591723428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:05:12.592226 containerd[1476]: time="2025-02-13T20:05:12.592188517Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:05:12.618982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:05:12.631939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:12.773288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:05:12.778710 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:05:12.998725 kubelet[1975]: E0213 20:05:12.998575 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:05:13.003214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:05:13.003427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:05:13.513290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734564745.mount: Deactivated successfully. 
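"Scheduled restart job, restart counter is at 2" shows systemd re-running kubelet roughly ten seconds after each config-file failure. That cadence matches the Restart= settings kubelet units typically ship with; as an assumption about this host's unit (the log does not print it), the relevant stanza would resemble:

    [Service]
    Restart=always
    RestartSec=10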
Feb 13 20:05:13.518939 containerd[1476]: time="2025-02-13T20:05:13.518899085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:13.519717 containerd[1476]: time="2025-02-13T20:05:13.519685864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 20:05:13.520823 containerd[1476]: time="2025-02-13T20:05:13.520799707Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:13.525018 containerd[1476]: time="2025-02-13T20:05:13.524977578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:13.525819 containerd[1476]: time="2025-02-13T20:05:13.525761107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 933.5346ms" Feb 13 20:05:13.525819 containerd[1476]: time="2025-02-13T20:05:13.525807288Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 20:05:13.526327 containerd[1476]: time="2025-02-13T20:05:13.526304759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 20:05:14.043189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117187106.mount: Deactivated successfully. Feb 13 20:05:16.879175 containerd[1476]: time="2025-02-13T20:05:16.879096796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:16.879885 containerd[1476]: time="2025-02-13T20:05:16.879819791Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 20:05:16.881145 containerd[1476]: time="2025-02-13T20:05:16.881108751Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:16.883986 containerd[1476]: time="2025-02-13T20:05:16.883951477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:16.885344 containerd[1476]: time="2025-02-13T20:05:16.885309431Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.35897535s" Feb 13 20:05:16.885391 containerd[1476]: time="2025-02-13T20:05:16.885344029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 20:05:18.947186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:05:18.959140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:18.985053 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit session-9.scope)... Feb 13 20:05:18.985069 systemd[1]: Reloading... Feb 13 20:05:19.066179 zram_generator::config[2109]: No configuration found. Feb 13 20:05:19.446087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:05:19.522091 systemd[1]: Reloading finished in 536 ms. Feb 13 20:05:19.574109 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:05:19.574204 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:05:19.574528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:05:19.576123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:19.722989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:05:19.727897 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:05:19.765158 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:05:19.765158 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:05:19.765158 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
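The docker.socket notice during the reload below is systemd transparently rewriting the legacy path /var/run/docker.sock to /run/docker.sock and asking for the unit file to be updated to match. The fix it requests is a single directive (other Socket options omitted):

    # docker.socket
    [Socket]
    ListenStream=/run/docker.sock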
Feb 13 20:05:19.769312 kubelet[2157]: I0213 20:05:19.769182 2157 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:05:20.599827 kubelet[2157]: I0213 20:05:20.599761 2157 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:05:20.599827 kubelet[2157]: I0213 20:05:20.599824 2157 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:05:20.600092 kubelet[2157]: I0213 20:05:20.600069 2157 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:05:20.660225 kubelet[2157]: I0213 20:05:20.660174 2157 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:05:20.660450 kubelet[2157]: E0213 20:05:20.660427 2157 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:20.668816 kubelet[2157]: E0213 20:05:20.668771 2157 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:05:20.668816 kubelet[2157]: I0213 20:05:20.668816 2157 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:05:20.674864 kubelet[2157]: I0213 20:05:20.674839 2157 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:05:20.675747 kubelet[2157]: I0213 20:05:20.675719 2157 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:05:20.675948 kubelet[2157]: I0213 20:05:20.675900 2157 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:05:20.676114 kubelet[2157]: I0213 20:05:20.675929 2157 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:05:20.676114 kubelet[2157]: I0213 20:05:20.676111 2157 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:05:20.676220 kubelet[2157]: I0213 20:05:20.676120 2157 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:05:20.676262 kubelet[2157]: I0213 20:05:20.676246 2157 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:05:20.677673 kubelet[2157]: I0213 20:05:20.677648 2157 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:05:20.677673 kubelet[2157]: I0213 20:05:20.677670 2157 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:05:20.677731 kubelet[2157]: I0213 20:05:20.677704 2157 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:05:20.677731 kubelet[2157]: I0213 20:05:20.677718 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:05:20.682682 kubelet[2157]: I0213 20:05:20.682646 2157 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:05:20.684774 kubelet[2157]: I0213 20:05:20.684628 2157 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:05:20.685466 kubelet[2157]: W0213 20:05:20.685188 2157 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
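The nodeConfig dump above is the container manager's effective configuration: systemd cgroup driver (picked by the fallback noted just before it, since this CRI does not implement RuntimeConfig), cgroup root /, and the stock hard-eviction thresholds. Restated in kubelet config-file form, the thresholds visible in that JSON are:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"

Each entry carries GracePeriod 0 in the dump, so crossing any one of them triggers immediate pod eviction on the node.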
Feb 13 20:05:20.685466 kubelet[2157]: W0213 20:05:20.685262 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 20:05:20.685466 kubelet[2157]: E0213 20:05:20.685333 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:20.686196 kubelet[2157]: I0213 20:05:20.685895 2157 server.go:1269] "Started kubelet" Feb 13 20:05:20.686196 kubelet[2157]: W0213 20:05:20.685959 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 20:05:20.686196 kubelet[2157]: E0213 20:05:20.686011 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:20.686305 kubelet[2157]: I0213 20:05:20.686177 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:05:20.686305 kubelet[2157]: I0213 20:05:20.686228 2157 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:05:20.686968 kubelet[2157]: I0213 20:05:20.686544 2157 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:05:20.687695 kubelet[2157]: I0213 20:05:20.687121 2157 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:05:20.687695 kubelet[2157]: I0213 20:05:20.687149 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:05:20.688655 kubelet[2157]: I0213 20:05:20.687861 2157 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:05:20.689594 kubelet[2157]: E0213 20:05:20.689192 2157 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:05:20.689594 kubelet[2157]: I0213 20:05:20.689454 2157 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:05:20.689703 kubelet[2157]: I0213 20:05:20.689671 2157 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:05:20.689768 kubelet[2157]: I0213 20:05:20.689742 2157 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:05:20.690749 kubelet[2157]: W0213 20:05:20.690026 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 20:05:20.690749 kubelet[2157]: E0213 20:05:20.690083 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:20.690749 kubelet[2157]: I0213 20:05:20.690244 2157 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:05:20.690749 kubelet[2157]: I0213 20:05:20.690330 2157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:05:20.690749 kubelet[2157]: E0213 20:05:20.688543 2157 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.159:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.159:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dd39a1668462 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:05:20.68586813 +0000 UTC m=+0.953950639,LastTimestamp:2025-02-13 20:05:20.68586813 +0000 UTC m=+0.953950639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:05:20.690749 kubelet[2157]: E0213 20:05:20.690617 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:20.691085 kubelet[2157]: E0213 20:05:20.691043 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="200ms" Feb 13 20:05:20.691868 kubelet[2157]: I0213 20:05:20.691850 2157 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:05:20.705130 kubelet[2157]: I0213 20:05:20.705090 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:05:20.706561 kubelet[2157]: I0213 20:05:20.706534 2157 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:05:20.706608 kubelet[2157]: I0213 20:05:20.706581 2157 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:05:20.706608 kubelet[2157]: I0213 20:05:20.706599 2157 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:05:20.706658 kubelet[2157]: E0213 20:05:20.706639 2157 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:05:20.707254 kubelet[2157]: W0213 20:05:20.707152 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 20:05:20.707254 kubelet[2157]: E0213 20:05:20.707209 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:20.710000 kubelet[2157]: I0213 20:05:20.709946 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:05:20.710046 kubelet[2157]: I0213 20:05:20.710026 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:05:20.710082 kubelet[2157]: I0213 20:05:20.710051 2157 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:05:20.791584 kubelet[2157]: E0213 20:05:20.791547 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:20.807764 kubelet[2157]: E0213 20:05:20.807721 2157 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:05:20.821077 kubelet[2157]: I0213 20:05:20.821045 2157 policy_none.go:49] "None policy: Start" Feb 13 20:05:20.821923 kubelet[2157]: I0213 20:05:20.821897 2157 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:05:20.821923 kubelet[2157]: I0213 20:05:20.821918 2157 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:05:20.891660 kubelet[2157]: E0213 20:05:20.891544 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="400ms" Feb 13 20:05:20.891660 kubelet[2157]: E0213 20:05:20.891581 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:20.935741 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:05:20.950087 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:05:20.967326 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
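Every watch against https://10.0.0.159:6443 is still refused because the kube-apiserver is itself one of the static pods this kubelet has yet to start; the reflectors simply log and retry. Meanwhile the container manager asks systemd (the selected cgroup driver) to create the QoS parent cgroups just seen, giving roughly this hierarchy (a sketch; the per-pod slices appear a few records later):

    kubepods.slice
    +-- kubepods-besteffort.slice
    +-- kubepods-burstable.slice
    |   +-- kubepods-burstable-pod<uid>.slice   (one per static pod, created below)
    +-- (Guaranteed pods get slices directly under kubepods.slice)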
Feb 13 20:05:20.968363 kubelet[2157]: I0213 20:05:20.968324 2157 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:05:20.968597 kubelet[2157]: I0213 20:05:20.968540 2157 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:05:20.968597 kubelet[2157]: I0213 20:05:20.968550 2157 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:05:20.968820 kubelet[2157]: I0213 20:05:20.968782 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:05:20.969930 kubelet[2157]: E0213 20:05:20.969904 2157 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:05:21.015301 systemd[1]: Created slice kubepods-burstable-pod743604301cf18fae874f95c194252bdc.slice - libcontainer container kubepods-burstable-pod743604301cf18fae874f95c194252bdc.slice. Feb 13 20:05:21.025568 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 20:05:21.039564 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 20:05:21.070153 kubelet[2157]: I0213 20:05:21.070100 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:05:21.070523 kubelet[2157]: E0213 20:05:21.070486 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Feb 13 20:05:21.092929 kubelet[2157]: I0213 20:05:21.092882 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:21.092929 kubelet[2157]: I0213 20:05:21.092920 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:21.093006 kubelet[2157]: I0213 20:05:21.092942 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/743604301cf18fae874f95c194252bdc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"743604301cf18fae874f95c194252bdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:21.093006 kubelet[2157]: I0213 20:05:21.092963 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/743604301cf18fae874f95c194252bdc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"743604301cf18fae874f95c194252bdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:21.093006 kubelet[2157]: I0213 20:05:21.092987 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:21.093083 kubelet[2157]: I0213 20:05:21.093007 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:21.093083 kubelet[2157]: I0213 20:05:21.093030 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/743604301cf18fae874f95c194252bdc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"743604301cf18fae874f95c194252bdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:21.093083 kubelet[2157]: I0213 20:05:21.093047 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:21.093083 kubelet[2157]: I0213 20:05:21.093064 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:21.272291 kubelet[2157]: I0213 20:05:21.272169 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:05:21.272463 kubelet[2157]: E0213 20:05:21.272428 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Feb 13 20:05:21.292017 kubelet[2157]: E0213 20:05:21.291977 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="800ms" Feb 13 20:05:21.323538 kubelet[2157]: E0213 20:05:21.323492 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:21.324177 containerd[1476]: time="2025-02-13T20:05:21.324123553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:743604301cf18fae874f95c194252bdc,Namespace:kube-system,Attempt:0,}" Feb 13 20:05:21.339416 kubelet[2157]: E0213 20:05:21.339360 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:21.339931 containerd[1476]: time="2025-02-13T20:05:21.339878566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 20:05:21.342131 kubelet[2157]: 
E0213 20:05:21.342099 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:21.344541 containerd[1476]: time="2025-02-13T20:05:21.344492207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 20:05:21.630476 kubelet[2157]: W0213 20:05:21.630356 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 20:05:21.630476 kubelet[2157]: E0213 20:05:21.630417 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:21.673857 kubelet[2157]: I0213 20:05:21.673838 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:05:21.674042 kubelet[2157]: E0213 20:05:21.674010 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Feb 13 20:05:21.777220 kubelet[2157]: W0213 20:05:21.777154 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 20:05:21.777220 kubelet[2157]: E0213 20:05:21.777221 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:22.092737 kubelet[2157]: E0213 20:05:22.092590 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="1.6s" Feb 13 20:05:22.166475 kubelet[2157]: W0213 20:05:22.166404 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 20:05:22.166607 kubelet[2157]: E0213 20:05:22.166484 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:22.204158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144748653.mount: Deactivated successfully. 
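The recurring dns.go warning concerns the host's resolver file, not the cluster: the kubelet propagates at most three nameserver entries into pod sandboxes (the same limit classic glibc resolvers honor), so extra entries are dropped and only the applied line shown in the log survives. A host resolv.conf that would not trigger the warning (a sketch using the three addresses the kubelet kept):

    # /etc/resolv.conf
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8

Note also the lease controller's retry interval doubling while the apiserver stays unreachable: 200ms, 400ms, 800ms, now 1.6s.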
Feb 13 20:05:22.212718 containerd[1476]: time="2025-02-13T20:05:22.212661715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:05:22.213587 containerd[1476]: time="2025-02-13T20:05:22.213561310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:05:22.214551 containerd[1476]: time="2025-02-13T20:05:22.214471238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:05:22.215344 containerd[1476]: time="2025-02-13T20:05:22.215298276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:05:22.216309 containerd[1476]: time="2025-02-13T20:05:22.216274547Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:05:22.217084 containerd[1476]: time="2025-02-13T20:05:22.217048375Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:05:22.217946 containerd[1476]: time="2025-02-13T20:05:22.217898059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:05:22.219908 containerd[1476]: time="2025-02-13T20:05:22.219879510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:05:22.221426 containerd[1476]: time="2025-02-13T20:05:22.221395661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 881.436394ms" Feb 13 20:05:22.222717 containerd[1476]: time="2025-02-13T20:05:22.222688470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 898.485919ms" Feb 13 20:05:22.225465 containerd[1476]: time="2025-02-13T20:05:22.225434697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 880.880014ms" Feb 13 20:05:22.242400 kubelet[2157]: W0213 20:05:22.242337 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.159:6443: connect: connection refused Feb 13 
20:05:22.242488 kubelet[2157]: E0213 20:05:22.242404 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:05:22.347990 containerd[1476]: time="2025-02-13T20:05:22.347482255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:05:22.347990 containerd[1476]: time="2025-02-13T20:05:22.347556385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:05:22.347990 containerd[1476]: time="2025-02-13T20:05:22.347571933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:22.347990 containerd[1476]: time="2025-02-13T20:05:22.347655757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.347552395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.349098499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.349111280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.349187546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.348773812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.348832895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.348843961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:22.349877 containerd[1476]: time="2025-02-13T20:05:22.348927133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:22.376920 systemd[1]: Started cri-containerd-28e6c5faa85723356bc8e36cd2bc11a45c27cf718f69d0f55a63bbcfaf50ad19.scope - libcontainer container 28e6c5faa85723356bc8e36cd2bc11a45c27cf718f69d0f55a63bbcfaf50ad19. Feb 13 20:05:22.378852 systemd[1]: Started cri-containerd-5133e6b311eaa42c8abd6e4f75c6a4640be6ac31f86e9356adf9763008d755ea.scope - libcontainer container 5133e6b311eaa42c8abd6e4f75c6a4640be6ac31f86e9356adf9763008d755ea. Feb 13 20:05:22.380412 systemd[1]: Started cri-containerd-d411b21e3c4639912a933ca88a36d853f217c330824b11b1a76cd99ee5575c53.scope - libcontainer container d411b21e3c4639912a933ca88a36d853f217c330824b11b1a76cd99ee5575c53. 
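Each RunPodSandbox call spawned an io.containerd.runc.v2 shim (the plugin loads above), and because everything runs under the systemd cgroup driver, each sandbox lands in its own transient cri-containerd-<id>.scope unit, as the Started lines show. A containerd config fragment consistent with that behavior (a sketch, not read from this host):

    # /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true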
Feb 13 20:05:22.420069 containerd[1476]: time="2025-02-13T20:05:22.420018893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"28e6c5faa85723356bc8e36cd2bc11a45c27cf718f69d0f55a63bbcfaf50ad19\"" Feb 13 20:05:22.422285 kubelet[2157]: E0213 20:05:22.422101 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:22.423962 containerd[1476]: time="2025-02-13T20:05:22.423922581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d411b21e3c4639912a933ca88a36d853f217c330824b11b1a76cd99ee5575c53\"" Feb 13 20:05:22.424520 kubelet[2157]: E0213 20:05:22.424485 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:22.425105 containerd[1476]: time="2025-02-13T20:05:22.424968109Z" level=info msg="CreateContainer within sandbox \"28e6c5faa85723356bc8e36cd2bc11a45c27cf718f69d0f55a63bbcfaf50ad19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:05:22.425105 containerd[1476]: time="2025-02-13T20:05:22.425017619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:743604301cf18fae874f95c194252bdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5133e6b311eaa42c8abd6e4f75c6a4640be6ac31f86e9356adf9763008d755ea\"" Feb 13 20:05:22.425782 kubelet[2157]: E0213 20:05:22.425760 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:22.426370 containerd[1476]: time="2025-02-13T20:05:22.426223778Z" level=info msg="CreateContainer within sandbox \"d411b21e3c4639912a933ca88a36d853f217c330824b11b1a76cd99ee5575c53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:05:22.427756 containerd[1476]: time="2025-02-13T20:05:22.427713043Z" level=info msg="CreateContainer within sandbox \"5133e6b311eaa42c8abd6e4f75c6a4640be6ac31f86e9356adf9763008d755ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:05:22.458385 containerd[1476]: time="2025-02-13T20:05:22.458346341Z" level=info msg="CreateContainer within sandbox \"d411b21e3c4639912a933ca88a36d853f217c330824b11b1a76cd99ee5575c53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0d566c4203c86c617543f9254a76756b102e7fbfb4bbc282dd732918260affd\"" Feb 13 20:05:22.459029 containerd[1476]: time="2025-02-13T20:05:22.459006192Z" level=info msg="StartContainer for \"f0d566c4203c86c617543f9254a76756b102e7fbfb4bbc282dd732918260affd\"" Feb 13 20:05:22.460409 containerd[1476]: time="2025-02-13T20:05:22.460363347Z" level=info msg="CreateContainer within sandbox \"28e6c5faa85723356bc8e36cd2bc11a45c27cf718f69d0f55a63bbcfaf50ad19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7bf1d703e48a64a1285f066b5a72e33407dd6862321978a0526aef529fd4a912\"" Feb 13 20:05:22.460694 containerd[1476]: time="2025-02-13T20:05:22.460670152Z" level=info msg="StartContainer for \"7bf1d703e48a64a1285f066b5a72e33407dd6862321978a0526aef529fd4a912\"" Feb 13 
20:05:22.463590 containerd[1476]: time="2025-02-13T20:05:22.463509436Z" level=info msg="CreateContainer within sandbox \"5133e6b311eaa42c8abd6e4f75c6a4640be6ac31f86e9356adf9763008d755ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1d0589682e76ec1c512dd1860b1b1b0873b206e8b5efd69123fee918a0741859\"" Feb 13 20:05:22.464161 containerd[1476]: time="2025-02-13T20:05:22.464137902Z" level=info msg="StartContainer for \"1d0589682e76ec1c512dd1860b1b1b0873b206e8b5efd69123fee918a0741859\"" Feb 13 20:05:22.475703 kubelet[2157]: I0213 20:05:22.475622 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:05:22.476256 kubelet[2157]: E0213 20:05:22.476171 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Feb 13 20:05:22.483921 systemd[1]: Started cri-containerd-f0d566c4203c86c617543f9254a76756b102e7fbfb4bbc282dd732918260affd.scope - libcontainer container f0d566c4203c86c617543f9254a76756b102e7fbfb4bbc282dd732918260affd. Feb 13 20:05:22.493950 systemd[1]: Started cri-containerd-7bf1d703e48a64a1285f066b5a72e33407dd6862321978a0526aef529fd4a912.scope - libcontainer container 7bf1d703e48a64a1285f066b5a72e33407dd6862321978a0526aef529fd4a912. Feb 13 20:05:22.496620 systemd[1]: Started cri-containerd-1d0589682e76ec1c512dd1860b1b1b0873b206e8b5efd69123fee918a0741859.scope - libcontainer container 1d0589682e76ec1c512dd1860b1b1b0873b206e8b5efd69123fee918a0741859. Feb 13 20:05:22.530037 containerd[1476]: time="2025-02-13T20:05:22.529990921Z" level=info msg="StartContainer for \"f0d566c4203c86c617543f9254a76756b102e7fbfb4bbc282dd732918260affd\" returns successfully" Feb 13 20:05:22.539570 containerd[1476]: time="2025-02-13T20:05:22.539438400Z" level=info msg="StartContainer for \"7bf1d703e48a64a1285f066b5a72e33407dd6862321978a0526aef529fd4a912\" returns successfully" Feb 13 20:05:22.544717 containerd[1476]: time="2025-02-13T20:05:22.544662013Z" level=info msg="StartContainer for \"1d0589682e76ec1c512dd1860b1b1b0873b206e8b5efd69123fee918a0741859\" returns successfully" Feb 13 20:05:22.717883 kubelet[2157]: E0213 20:05:22.715354 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:22.723488 kubelet[2157]: E0213 20:05:22.723441 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:22.726035 kubelet[2157]: E0213 20:05:22.726012 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:23.726499 kubelet[2157]: E0213 20:05:23.726426 2157 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:05:23.729752 kubelet[2157]: E0213 20:05:23.729351 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:23.808267 kubelet[2157]: E0213 20:05:23.808227 2157 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"localhost" not found Feb 13 20:05:24.077800 kubelet[2157]: I0213 20:05:24.077680 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:05:24.085892 kubelet[2157]: I0213 20:05:24.085861 2157 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 20:05:24.085892 kubelet[2157]: E0213 20:05:24.085890 2157 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 20:05:24.092181 kubelet[2157]: E0213 20:05:24.092155 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:24.155608 kubelet[2157]: E0213 20:05:24.155584 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:24.192509 kubelet[2157]: E0213 20:05:24.192457 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:24.292966 kubelet[2157]: E0213 20:05:24.292923 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:24.685271 kubelet[2157]: I0213 20:05:24.685220 2157 apiserver.go:52] "Watching apiserver" Feb 13 20:05:24.689963 kubelet[2157]: I0213 20:05:24.689934 2157 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:05:24.736704 kubelet[2157]: E0213 20:05:24.736668 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:25.730373 kubelet[2157]: E0213 20:05:25.730344 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:26.427219 systemd[1]: Reloading requested from client PID 2441 ('systemctl') (unit session-9.scope)... Feb 13 20:05:26.427235 systemd[1]: Reloading... Feb 13 20:05:26.499027 zram_generator::config[2486]: No configuration found. Feb 13 20:05:26.612469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:05:26.701392 systemd[1]: Reloading finished in 273 ms. Feb 13 20:05:26.751071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:26.772180 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:05:26.772405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:05:26.789010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:05:26.926605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:05:26.937096 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:05:26.973294 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:05:26.973294 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:05:26.973294 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:05:26.973656 kubelet[2525]: I0213 20:05:26.973289 2525 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:05:26.980758 kubelet[2525]: I0213 20:05:26.980714 2525 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:05:26.980758 kubelet[2525]: I0213 20:05:26.980737 2525 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:05:26.981406 kubelet[2525]: I0213 20:05:26.981365 2525 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:05:26.982817 kubelet[2525]: I0213 20:05:26.982781 2525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:05:26.984717 kubelet[2525]: I0213 20:05:26.984683 2525 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:05:26.988407 kubelet[2525]: E0213 20:05:26.988372 2525 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:05:26.988453 kubelet[2525]: I0213 20:05:26.988408 2525 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:05:26.994012 kubelet[2525]: I0213 20:05:26.993996 2525 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:05:26.994121 kubelet[2525]: I0213 20:05:26.994106 2525 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:05:26.994273 kubelet[2525]: I0213 20:05:26.994245 2525 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:05:26.994478 kubelet[2525]: I0213 20:05:26.994271 2525 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:05:26.994552 kubelet[2525]: I0213 20:05:26.994484 2525 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:05:26.994552 kubelet[2525]: I0213 20:05:26.994494 2525 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:05:26.994552 kubelet[2525]: I0213 20:05:26.994520 2525 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:05:26.994654 kubelet[2525]: I0213 20:05:26.994637 2525 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:05:26.994654 kubelet[2525]: I0213 20:05:26.994652 2525 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:05:26.994710 kubelet[2525]: I0213 20:05:26.994688 2525 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:05:26.994710 kubelet[2525]: I0213 20:05:26.994702 2525 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:05:26.995431 kubelet[2525]: I0213 20:05:26.995322 2525 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:05:26.995939 kubelet[2525]: I0213 20:05:26.995922 2525 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:05:26.997769 kubelet[2525]: I0213 20:05:26.996330 2525 server.go:1269] "Started kubelet" Feb 13 20:05:26.997769 kubelet[2525]: I0213 20:05:26.996528 2525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:05:26.997769 kubelet[2525]: I0213 
20:05:26.996772 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:05:26.997769 kubelet[2525]: I0213 20:05:26.997662 2525 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:05:26.999186 kubelet[2525]: I0213 20:05:26.999153 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:05:27.002698 kubelet[2525]: I0213 20:05:27.002679 2525 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:05:27.003365 kubelet[2525]: I0213 20:05:27.003326 2525 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:05:27.003607 kubelet[2525]: E0213 20:05:27.003584 2525 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:05:27.003697 kubelet[2525]: I0213 20:05:27.003672 2525 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:05:27.003827 kubelet[2525]: I0213 20:05:27.003807 2525 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:05:27.003997 kubelet[2525]: I0213 20:05:27.003954 2525 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:05:27.011931 kubelet[2525]: I0213 20:05:27.011903 2525 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:05:27.012738 kubelet[2525]: I0213 20:05:27.012705 2525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:05:27.014135 kubelet[2525]: E0213 20:05:27.014102 2525 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:05:27.014415 kubelet[2525]: I0213 20:05:27.014398 2525 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:05:27.024024 kubelet[2525]: I0213 20:05:27.023857 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:05:27.025857 kubelet[2525]: I0213 20:05:27.025844 2525 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:05:27.025957 kubelet[2525]: I0213 20:05:27.025944 2525 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:05:27.026012 kubelet[2525]: I0213 20:05:27.026003 2525 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:05:27.026097 kubelet[2525]: E0213 20:05:27.026081 2525 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:05:27.047082 kubelet[2525]: I0213 20:05:27.047055 2525 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:05:27.047222 kubelet[2525]: I0213 20:05:27.047210 2525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:05:27.047339 kubelet[2525]: I0213 20:05:27.047329 2525 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:05:27.047586 kubelet[2525]: I0213 20:05:27.047496 2525 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:05:27.047586 kubelet[2525]: I0213 20:05:27.047509 2525 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:05:27.047586 kubelet[2525]: I0213 20:05:27.047527 2525 policy_none.go:49] "None policy: Start" Feb 13 20:05:27.048090 kubelet[2525]: I0213 20:05:27.048051 2525 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:05:27.048171 kubelet[2525]: I0213 20:05:27.048098 2525 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:05:27.048257 kubelet[2525]: I0213 20:05:27.048243 2525 state_mem.go:75] "Updated machine memory state" Feb 13 20:05:27.052103 kubelet[2525]: I0213 20:05:27.052077 2525 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:05:27.052263 kubelet[2525]: I0213 20:05:27.052237 2525 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:05:27.052301 kubelet[2525]: I0213 20:05:27.052256 2525 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:05:27.052671 kubelet[2525]: I0213 20:05:27.052552 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:05:27.133509 kubelet[2525]: E0213 20:05:27.133457 2525 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:27.156648 kubelet[2525]: I0213 20:05:27.156615 2525 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:05:27.161299 kubelet[2525]: I0213 20:05:27.161275 2525 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 20:05:27.161376 kubelet[2525]: I0213 20:05:27.161355 2525 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 20:05:27.204291 kubelet[2525]: I0213 20:05:27.204254 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:27.204291 kubelet[2525]: I0213 20:05:27.204280 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:27.204291 kubelet[2525]: I0213 20:05:27.204297 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:27.204291 kubelet[2525]: I0213 20:05:27.204311 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:05:27.204513 kubelet[2525]: I0213 20:05:27.204330 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/743604301cf18fae874f95c194252bdc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"743604301cf18fae874f95c194252bdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:27.204513 kubelet[2525]: I0213 20:05:27.204347 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/743604301cf18fae874f95c194252bdc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"743604301cf18fae874f95c194252bdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:27.204513 kubelet[2525]: I0213 20:05:27.204362 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:27.204513 kubelet[2525]: I0213 20:05:27.204378 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:05:27.204513 kubelet[2525]: I0213 20:05:27.204417 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/743604301cf18fae874f95c194252bdc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"743604301cf18fae874f95c194252bdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:05:27.433457 kubelet[2525]: E0213 20:05:27.433343 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:27.433457 kubelet[2525]: E0213 20:05:27.433360 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:27.434150 kubelet[2525]: E0213 20:05:27.433626 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:27.995672 kubelet[2525]: I0213 20:05:27.995492 2525 apiserver.go:52] "Watching apiserver" Feb 13 20:05:28.004471 kubelet[2525]: I0213 20:05:28.004426 2525 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:05:28.037584 kubelet[2525]: E0213 20:05:28.037545 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:28.037977 kubelet[2525]: E0213 20:05:28.037954 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:28.038144 kubelet[2525]: E0213 20:05:28.038122 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:28.064806 kubelet[2525]: I0213 20:05:28.064715 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.064699679 podStartE2EDuration="1.064699679s" podCreationTimestamp="2025-02-13 20:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:05:28.063432001 +0000 UTC m=+1.122706155" watchObservedRunningTime="2025-02-13 20:05:28.064699679 +0000 UTC m=+1.123973823" Feb 13 20:05:28.082661 kubelet[2525]: I0213 20:05:28.082589 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.082572426 podStartE2EDuration="4.082572426s" podCreationTimestamp="2025-02-13 20:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:05:28.075242885 +0000 UTC m=+1.134517029" watchObservedRunningTime="2025-02-13 20:05:28.082572426 +0000 UTC m=+1.141846580" Feb 13 20:05:29.038520 kubelet[2525]: E0213 20:05:29.038477 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:30.039408 kubelet[2525]: E0213 20:05:30.039369 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:31.058763 sudo[1668]: pam_unix(sudo:session): session closed for user root Feb 13 20:05:31.060731 sshd[1665]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:31.064750 systemd[1]: sshd@8-10.0.0.159:22-10.0.0.1:56786.service: Deactivated successfully. Feb 13 20:05:31.066728 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:05:31.066963 systemd[1]: session-9.scope: Consumed 3.913s CPU time, 158.1M memory peak, 0B memory swap peak. Feb 13 20:05:31.067485 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:05:31.068287 systemd-logind[1462]: Removed session 9. 
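The pod_startup_latency_tracker lines record the node's pod-start SLO metric. For these static pods nothing was pulled (both pull timestamps are the zero time), so the SLO duration reduces to observed running time minus pod creation time; for kube-controller-manager that works out as (timestamps from the log line above):

    podStartSLOduration = watchObservedRunningTime - podCreationTimestamp
                        = 20:05:28.064699679 - 20:05:27.000000000
                        = 1.064699679s   (no image-pull time to subtract)

The sudo/sshd records closing session-9 mark the end of the shell that drove both systemd reloads (client PIDs 2070 and 2441 were in session-9.scope).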
Feb 13 20:05:32.076130 kubelet[2525]: I0213 20:05:32.076094 2525 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:05:32.076524 containerd[1476]: time="2025-02-13T20:05:32.076431513Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:05:32.076778 kubelet[2525]: I0213 20:05:32.076659 2525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:05:32.217244 kubelet[2525]: I0213 20:05:32.217177 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.21715725 podStartE2EDuration="5.21715725s" podCreationTimestamp="2025-02-13 20:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:05:28.082883583 +0000 UTC m=+1.142157737" watchObservedRunningTime="2025-02-13 20:05:32.21715725 +0000 UTC m=+5.276431404" Feb 13 20:05:32.225119 systemd[1]: Created slice kubepods-besteffort-pod81146ab4_4b36_4e19_b829_fdba5a1ec180.slice - libcontainer container kubepods-besteffort-pod81146ab4_4b36_4e19_b829_fdba5a1ec180.slice. Feb 13 20:05:32.237837 kubelet[2525]: I0213 20:05:32.237806 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81146ab4-4b36-4e19-b829-fdba5a1ec180-xtables-lock\") pod \"kube-proxy-4m9nc\" (UID: \"81146ab4-4b36-4e19-b829-fdba5a1ec180\") " pod="kube-system/kube-proxy-4m9nc" Feb 13 20:05:32.237924 kubelet[2525]: I0213 20:05:32.237844 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drqn\" (UniqueName: \"kubernetes.io/projected/81146ab4-4b36-4e19-b829-fdba5a1ec180-kube-api-access-8drqn\") pod \"kube-proxy-4m9nc\" (UID: \"81146ab4-4b36-4e19-b829-fdba5a1ec180\") " pod="kube-system/kube-proxy-4m9nc" Feb 13 20:05:32.237924 kubelet[2525]: I0213 20:05:32.237864 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/81146ab4-4b36-4e19-b829-fdba5a1ec180-kube-proxy\") pod \"kube-proxy-4m9nc\" (UID: \"81146ab4-4b36-4e19-b829-fdba5a1ec180\") " pod="kube-system/kube-proxy-4m9nc" Feb 13 20:05:32.237924 kubelet[2525]: I0213 20:05:32.237880 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81146ab4-4b36-4e19-b829-fdba5a1ec180-lib-modules\") pod \"kube-proxy-4m9nc\" (UID: \"81146ab4-4b36-4e19-b829-fdba5a1ec180\") " pod="kube-system/kube-proxy-4m9nc" Feb 13 20:05:32.343046 kubelet[2525]: E0213 20:05:32.342941 2525 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:05:32.343046 kubelet[2525]: E0213 20:05:32.342968 2525 projected.go:194] Error preparing data for projected volume kube-api-access-8drqn for pod kube-system/kube-proxy-4m9nc: configmap "kube-root-ca.crt" not found Feb 13 20:05:32.343046 kubelet[2525]: E0213 20:05:32.343021 2525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81146ab4-4b36-4e19-b829-fdba5a1ec180-kube-api-access-8drqn podName:81146ab4-4b36-4e19-b829-fdba5a1ec180 nodeName:}" failed. 
No retries permitted until 2025-02-13 20:05:32.843004997 +0000 UTC m=+5.902279151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8drqn" (UniqueName: "kubernetes.io/projected/81146ab4-4b36-4e19-b829-fdba5a1ec180-kube-api-access-8drqn") pod "kube-proxy-4m9nc" (UID: "81146ab4-4b36-4e19-b829-fdba5a1ec180") : configmap "kube-root-ca.crt" not found Feb 13 20:05:32.530411 kubelet[2525]: E0213 20:05:32.530336 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:33.043604 kubelet[2525]: E0213 20:05:33.043572 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:33.135754 kubelet[2525]: E0213 20:05:33.135352 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:33.136207 systemd[1]: Created slice kubepods-besteffort-pod3a7e7f90_87f7_4bda_ac11_50c42aa1b682.slice - libcontainer container kubepods-besteffort-pod3a7e7f90_87f7_4bda_ac11_50c42aa1b682.slice. Feb 13 20:05:33.140090 containerd[1476]: time="2025-02-13T20:05:33.139964728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4m9nc,Uid:81146ab4-4b36-4e19-b829-fdba5a1ec180,Namespace:kube-system,Attempt:0,}" Feb 13 20:05:33.143587 kubelet[2525]: I0213 20:05:33.143554 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a7e7f90-87f7-4bda-ac11-50c42aa1b682-var-lib-calico\") pod \"tigera-operator-76c4976dd7-nbjtf\" (UID: \"3a7e7f90-87f7-4bda-ac11-50c42aa1b682\") " pod="tigera-operator/tigera-operator-76c4976dd7-nbjtf" Feb 13 20:05:33.143633 kubelet[2525]: I0213 20:05:33.143591 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gdp\" (UniqueName: \"kubernetes.io/projected/3a7e7f90-87f7-4bda-ac11-50c42aa1b682-kube-api-access-d7gdp\") pod \"tigera-operator-76c4976dd7-nbjtf\" (UID: \"3a7e7f90-87f7-4bda-ac11-50c42aa1b682\") " pod="tigera-operator/tigera-operator-76c4976dd7-nbjtf" Feb 13 20:05:33.162177 containerd[1476]: time="2025-02-13T20:05:33.161295063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:05:33.162177 containerd[1476]: time="2025-02-13T20:05:33.162017317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:05:33.162177 containerd[1476]: time="2025-02-13T20:05:33.162046541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:33.162177 containerd[1476]: time="2025-02-13T20:05:33.162140334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:33.175484 systemd[1]: run-containerd-runc-k8s.io-88b4f411dcd16f5019ad792068210f432891989fb61ec85a36edf5fa6ee87953-runc.pOCHxA.mount: Deactivated successfully. 
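The nestedpendingoperations entry above records the volume manager's retry policy: a failed MountVolume.SetUp may not be retried before durationBeforeRetry elapses (500ms here), and the delay grows on repeated failures until the kube-root-ca.crt configmap finally appears. A sketch of such exponential backoff follows; the 500ms initial delay is taken from the log, while the doubling factor and the 2m2s cap are assumptions for illustration.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Initial delay matches durationBeforeRetry in the log entry above;
	// the doubling factor and the 2m2s cap are illustrative assumptions.
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```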
Feb 13 20:05:33.181219 kubelet[2525]: E0213 20:05:33.181187 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:33.184943 systemd[1]: Started cri-containerd-88b4f411dcd16f5019ad792068210f432891989fb61ec85a36edf5fa6ee87953.scope - libcontainer container 88b4f411dcd16f5019ad792068210f432891989fb61ec85a36edf5fa6ee87953. Feb 13 20:05:33.209277 containerd[1476]: time="2025-02-13T20:05:33.209236126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4m9nc,Uid:81146ab4-4b36-4e19-b829-fdba5a1ec180,Namespace:kube-system,Attempt:0,} returns sandbox id \"88b4f411dcd16f5019ad792068210f432891989fb61ec85a36edf5fa6ee87953\"" Feb 13 20:05:33.210085 kubelet[2525]: E0213 20:05:33.210050 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:33.212222 containerd[1476]: time="2025-02-13T20:05:33.212179688Z" level=info msg="CreateContainer within sandbox \"88b4f411dcd16f5019ad792068210f432891989fb61ec85a36edf5fa6ee87953\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:05:33.228759 containerd[1476]: time="2025-02-13T20:05:33.228719868Z" level=info msg="CreateContainer within sandbox \"88b4f411dcd16f5019ad792068210f432891989fb61ec85a36edf5fa6ee87953\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c478d134846e2919d2755bb5a2f064c183393938c3e4c8eaa39da1588486308\"" Feb 13 20:05:33.229369 containerd[1476]: time="2025-02-13T20:05:33.229321589Z" level=info msg="StartContainer for \"8c478d134846e2919d2755bb5a2f064c183393938c3e4c8eaa39da1588486308\"" Feb 13 20:05:33.260925 systemd[1]: Started cri-containerd-8c478d134846e2919d2755bb5a2f064c183393938c3e4c8eaa39da1588486308.scope - libcontainer container 8c478d134846e2919d2755bb5a2f064c183393938c3e4c8eaa39da1588486308. Feb 13 20:05:33.289289 containerd[1476]: time="2025-02-13T20:05:33.289243942Z" level=info msg="StartContainer for \"8c478d134846e2919d2755bb5a2f064c183393938c3e4c8eaa39da1588486308\" returns successfully" Feb 13 20:05:33.443267 containerd[1476]: time="2025-02-13T20:05:33.443134730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-nbjtf,Uid:3a7e7f90-87f7-4bda-ac11-50c42aa1b682,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:05:33.470843 containerd[1476]: time="2025-02-13T20:05:33.470119913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:05:33.470843 containerd[1476]: time="2025-02-13T20:05:33.470774409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:05:33.470843 containerd[1476]: time="2025-02-13T20:05:33.470800155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:33.470979 containerd[1476]: time="2025-02-13T20:05:33.470891985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:33.490920 systemd[1]: Started cri-containerd-b6c81584aff131866837cabf04439f0237c254e6ec723e1e9baf03ca56f6b485.scope - libcontainer container b6c81584aff131866837cabf04439f0237c254e6ec723e1e9baf03ca56f6b485. 
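The containerd and systemd entries above trace the usual CRI sequence for bringing up a pod: RunPodSandbox creates the sandbox (run as a systemd scope around the runc shim), CreateContainer stages a container inside that sandbox, and StartContainer launches it. The interface below is a simplified stand-in for that call order, not the real k8s.io/cri-api definitions, which carry rich config structs and gRPC plumbing.

```go
package main

import "fmt"

// runtimeService is a simplified stand-in for the CRI RuntimeService.
type runtimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error) {
	return "sandbox-for-" + pod, nil // containerd returns a sandbox id, e.g. 88b4f411...
}

func (fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	return "container-" + name, nil // e.g. 8c478d13...
}

func (fakeRuntime) StartContainer(id string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
	return nil
}

func main() {
	var rt runtimeService = fakeRuntime{}
	// The same three-step order visible in the log for kube-proxy-4m9nc:
	sb, _ := rt.RunPodSandbox("kube-proxy-4m9nc")
	c, _ := rt.CreateContainer(sb, "kube-proxy")
	_ = rt.StartContainer(c)
}
```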
Feb 13 20:05:33.528178 containerd[1476]: time="2025-02-13T20:05:33.528143480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-nbjtf,Uid:3a7e7f90-87f7-4bda-ac11-50c42aa1b682,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b6c81584aff131866837cabf04439f0237c254e6ec723e1e9baf03ca56f6b485\"" Feb 13 20:05:33.530156 containerd[1476]: time="2025-02-13T20:05:33.530068282Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:05:34.047379 kubelet[2525]: E0213 20:05:34.047316 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:34.047379 kubelet[2525]: E0213 20:05:34.047355 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:34.055464 kubelet[2525]: I0213 20:05:34.055414 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4m9nc" podStartSLOduration=2.055394409 podStartE2EDuration="2.055394409s" podCreationTimestamp="2025-02-13 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:05:34.055207594 +0000 UTC m=+7.114481748" watchObservedRunningTime="2025-02-13 20:05:34.055394409 +0000 UTC m=+7.114668563" Feb 13 20:05:34.985956 update_engine[1465]: I20250213 20:05:34.985878 1465 update_attempter.cc:509] Updating boot flags... Feb 13 20:05:35.007885 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2876) Feb 13 20:05:35.042328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2877) Feb 13 20:05:35.071835 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2877) Feb 13 20:05:35.357920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773983411.mount: Deactivated successfully. 
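Several log formats are interleaved in this capture: kubelet messages carry klog's "Lmmdd hh:mm:ss.uuuuuu pid file:line]" header, where the leading letter is I, W, E, or F for the severity, containerd emits logrus key=value records, and the kernel, systemd, and update_engine lines use their own prefixes. A small illustrative parser for the klog header (this regex is an assumption about the format, not code from klog itself):

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches klog's "Lmmdd hh:mm:ss.uuuuuu threadid file:line]"
// prefix: severity letter, date, time, pid, and source location.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([\w.]+:\d+)\]`)

func main() {
	line := `E0213 20:05:34.047316    2525 dns.go:153] "Nameserver limits exceeded"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n",
		m[1], m[2], m[3], m[4], m[5])
	// severity=E date=0213 time=20:05:34.047316 pid=2525 source=dns.go:153
}
```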
Feb 13 20:05:35.639383 containerd[1476]: time="2025-02-13T20:05:35.639329027Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:35.640000 containerd[1476]: time="2025-02-13T20:05:35.639939929Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:05:35.640999 containerd[1476]: time="2025-02-13T20:05:35.640968991Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:35.643116 containerd[1476]: time="2025-02-13T20:05:35.643079015Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:35.643794 containerd[1476]: time="2025-02-13T20:05:35.643737571Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.113641348s" Feb 13 20:05:35.643827 containerd[1476]: time="2025-02-13T20:05:35.643805387Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:05:35.645820 containerd[1476]: time="2025-02-13T20:05:35.645760489Z" level=info msg="CreateContainer within sandbox \"b6c81584aff131866837cabf04439f0237c254e6ec723e1e9baf03ca56f6b485\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:05:35.655566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345436518.mount: Deactivated successfully. Feb 13 20:05:35.656684 containerd[1476]: time="2025-02-13T20:05:35.656639117Z" level=info msg="CreateContainer within sandbox \"b6c81584aff131866837cabf04439f0237c254e6ec723e1e9baf03ca56f6b485\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"94cbadfc8b7d58a380e2827b19905a061a150796bf3425a0eb7adc95c05ee20b\"" Feb 13 20:05:35.657169 containerd[1476]: time="2025-02-13T20:05:35.657114269Z" level=info msg="StartContainer for \"94cbadfc8b7d58a380e2827b19905a061a150796bf3425a0eb7adc95c05ee20b\"" Feb 13 20:05:35.684923 systemd[1]: Started cri-containerd-94cbadfc8b7d58a380e2827b19905a061a150796bf3425a0eb7adc95c05ee20b.scope - libcontainer container 94cbadfc8b7d58a380e2827b19905a061a150796bf3425a0eb7adc95c05ee20b. 
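The pull record above contains enough to estimate throughput: containerd reports 21762497 bytes read for quay.io/tigera/operator:v1.36.2 over 2.113641348s, and it distinguishes three identities for the same image: the repo tag, the content-addressed repo digest (the @sha256:fc9e... reference), and the local image id (sha256:3045...). A quick calculation of the effective pull rate from those logged figures:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken verbatim from the containerd entries above.
	const bytesRead = 21762497
	elapsed := 2113641348 * time.Nanosecond // 2.113641348s
	rate := float64(bytesRead) / elapsed.Seconds()
	fmt.Printf("~%.1f MiB/s effective pull rate\n", rate/(1<<20))
	// ~9.8 MiB/s
}
```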
Feb 13 20:05:35.708030 containerd[1476]: time="2025-02-13T20:05:35.707946382Z" level=info msg="StartContainer for \"94cbadfc8b7d58a380e2827b19905a061a150796bf3425a0eb7adc95c05ee20b\" returns successfully" Feb 13 20:05:38.971711 kubelet[2525]: I0213 20:05:38.971633 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-nbjtf" podStartSLOduration=3.856555022 podStartE2EDuration="5.971617622s" podCreationTimestamp="2025-02-13 20:05:33 +0000 UTC" firstStartedPulling="2025-02-13 20:05:33.529491716 +0000 UTC m=+6.588765870" lastFinishedPulling="2025-02-13 20:05:35.644554326 +0000 UTC m=+8.703828470" observedRunningTime="2025-02-13 20:05:36.065133965 +0000 UTC m=+9.124408119" watchObservedRunningTime="2025-02-13 20:05:38.971617622 +0000 UTC m=+12.030891766" Feb 13 20:05:38.985561 kubelet[2525]: I0213 20:05:38.985513 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02be1e70-e3fb-4763-9e90-527e506e9cab-tigera-ca-bundle\") pod \"calico-typha-77f6ffbdf6-t55np\" (UID: \"02be1e70-e3fb-4763-9e90-527e506e9cab\") " pod="calico-system/calico-typha-77f6ffbdf6-t55np" Feb 13 20:05:38.985705 kubelet[2525]: I0213 20:05:38.985552 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqw5w\" (UniqueName: \"kubernetes.io/projected/02be1e70-e3fb-4763-9e90-527e506e9cab-kube-api-access-fqw5w\") pod \"calico-typha-77f6ffbdf6-t55np\" (UID: \"02be1e70-e3fb-4763-9e90-527e506e9cab\") " pod="calico-system/calico-typha-77f6ffbdf6-t55np" Feb 13 20:05:38.985705 kubelet[2525]: I0213 20:05:38.985613 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/02be1e70-e3fb-4763-9e90-527e506e9cab-typha-certs\") pod \"calico-typha-77f6ffbdf6-t55np\" (UID: \"02be1e70-e3fb-4763-9e90-527e506e9cab\") " pod="calico-system/calico-typha-77f6ffbdf6-t55np" Feb 13 20:05:38.987680 systemd[1]: Created slice kubepods-besteffort-pod02be1e70_e3fb_4763_9e90_527e506e9cab.slice - libcontainer container kubepods-besteffort-pod02be1e70_e3fb_4763_9e90_527e506e9cab.slice. Feb 13 20:05:39.026050 systemd[1]: Created slice kubepods-besteffort-pod040d4b9a_bcac_4bd6_8e73_7b5d9932ba9f.slice - libcontainer container kubepods-besteffort-pod040d4b9a_bcac_4bd6_8e73_7b5d9932ba9f.slice. 
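The tigera-operator latency entry above shows how the two durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the time spent pulling images (lastFinishedPulling minus firstStartedPulling). That is why the static control-plane pods earlier, which pull nothing, report identical SLO and E2E values. The sketch below reproduces the arithmetic from the logged timestamps; it is an illustration of the relationship, not the tracker's code, and the last digits differ slightly from the logged 3.856555022s because the tracker works from monotonic clock readings (the m=+... offsets).

```go
package main

import (
	"fmt"
	"time"
)

func parse(s string) time.Time {
	// Layout matches the "2025-02-13 20:05:33 +0000 UTC" form in the log.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	created := parse("2025-02-13 20:05:33 +0000 UTC")
	firstPull := parse("2025-02-13 20:05:33.529491716 +0000 UTC")
	lastPull := parse("2025-02-13 20:05:35.644554326 +0000 UTC")
	observed := parse("2025-02-13 20:05:38.971617622 +0000 UTC")

	e2e := observed.Sub(created)         // 5.971617622s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~3.856555s   = podStartSLOduration
	fmt.Println(e2e, slo)
}
```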
Feb 13 20:05:39.086821 kubelet[2525]: I0213 20:05:39.086754 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-var-lib-calico\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.086821 kubelet[2525]: I0213 20:05:39.086812 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-lib-modules\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.086821 kubelet[2525]: I0213 20:05:39.086828 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-var-run-calico\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087090 kubelet[2525]: I0213 20:05:39.086918 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-policysync\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087090 kubelet[2525]: I0213 20:05:39.086973 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-node-certs\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087090 kubelet[2525]: I0213 20:05:39.086995 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-cni-log-dir\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087090 kubelet[2525]: I0213 20:05:39.087015 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-xtables-lock\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087090 kubelet[2525]: I0213 20:05:39.087032 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-tigera-ca-bundle\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087207 kubelet[2525]: I0213 20:05:39.087047 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-flexvol-driver-host\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087207 kubelet[2525]: I0213 20:05:39.087064 2525 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9hfp\" (UniqueName: \"kubernetes.io/projected/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-kube-api-access-v9hfp\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087207 kubelet[2525]: I0213 20:05:39.087084 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-cni-bin-dir\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.087207 kubelet[2525]: I0213 20:05:39.087101 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f-cni-net-dir\") pod \"calico-node-6tvvn\" (UID: \"040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f\") " pod="calico-system/calico-node-6tvvn" Feb 13 20:05:39.164454 kubelet[2525]: E0213 20:05:39.164392 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23" Feb 13 20:05:39.189815 kubelet[2525]: I0213 20:05:39.188285 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c798cf42-a2d5-48e9-9db3-eab6f1d0ef23-kubelet-dir\") pod \"csi-node-driver-rlkfd\" (UID: \"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23\") " pod="calico-system/csi-node-driver-rlkfd" Feb 13 20:05:39.189815 kubelet[2525]: I0213 20:05:39.188332 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c798cf42-a2d5-48e9-9db3-eab6f1d0ef23-socket-dir\") pod \"csi-node-driver-rlkfd\" (UID: \"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23\") " pod="calico-system/csi-node-driver-rlkfd" Feb 13 20:05:39.189815 kubelet[2525]: I0213 20:05:39.188370 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c798cf42-a2d5-48e9-9db3-eab6f1d0ef23-varrun\") pod \"csi-node-driver-rlkfd\" (UID: \"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23\") " pod="calico-system/csi-node-driver-rlkfd" Feb 13 20:05:39.189815 kubelet[2525]: I0213 20:05:39.188388 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c798cf42-a2d5-48e9-9db3-eab6f1d0ef23-registration-dir\") pod \"csi-node-driver-rlkfd\" (UID: \"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23\") " pod="calico-system/csi-node-driver-rlkfd" Feb 13 20:05:39.189815 kubelet[2525]: I0213 20:05:39.188407 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz6ns\" (UniqueName: \"kubernetes.io/projected/c798cf42-a2d5-48e9-9db3-eab6f1d0ef23-kube-api-access-tz6ns\") pod \"csi-node-driver-rlkfd\" (UID: \"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23\") " pod="calico-system/csi-node-driver-rlkfd" Feb 13 20:05:39.190561 kubelet[2525]: E0213 20:05:39.190520 2525 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Feb 13 20:05:39.190561 kubelet[2525]: W0213 20:05:39.190555 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.190636 kubelet[2525]: E0213 20:05:39.190589 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.191520 kubelet[2525]: E0213 20:05:39.191485 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.191520 kubelet[2525]: W0213 20:05:39.191513 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.191595 kubelet[2525]: E0213 20:05:39.191526 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.194101 kubelet[2525]: E0213 20:05:39.194070 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.194101 kubelet[2525]: W0213 20:05:39.194093 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.194176 kubelet[2525]: E0213 20:05:39.194108 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.201510 kubelet[2525]: E0213 20:05:39.201480 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.201510 kubelet[2525]: W0213 20:05:39.201502 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.201593 kubelet[2525]: E0213 20:05:39.201523 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.289272 kubelet[2525]: E0213 20:05:39.289178 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.289272 kubelet[2525]: W0213 20:05:39.289200 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.289272 kubelet[2525]: E0213 20:05:39.289221 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.289531 kubelet[2525]: E0213 20:05:39.289504 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.289531 kubelet[2525]: W0213 20:05:39.289518 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.289531 kubelet[2525]: E0213 20:05:39.289532 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.290013 kubelet[2525]: E0213 20:05:39.289970 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.290013 kubelet[2525]: W0213 20:05:39.289992 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.290111 kubelet[2525]: E0213 20:05:39.290022 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.290398 kubelet[2525]: E0213 20:05:39.290371 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.290398 kubelet[2525]: W0213 20:05:39.290396 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.290481 kubelet[2525]: E0213 20:05:39.290418 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.290508 kubelet[2525]: E0213 20:05:39.290384 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:39.290831 kubelet[2525]: E0213 20:05:39.290759 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.290831 kubelet[2525]: W0213 20:05:39.290775 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.290831 kubelet[2525]: E0213 20:05:39.290803 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.291028 kubelet[2525]: E0213 20:05:39.291009 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.291028 kubelet[2525]: W0213 20:05:39.291022 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.291082 kubelet[2525]: E0213 20:05:39.291038 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.291105 containerd[1476]: time="2025-02-13T20:05:39.291007287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f6ffbdf6-t55np,Uid:02be1e70-e3fb-4763-9e90-527e506e9cab,Namespace:calico-system,Attempt:0,}" Feb 13 20:05:39.291564 kubelet[2525]: E0213 20:05:39.291199 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.291564 kubelet[2525]: W0213 20:05:39.291207 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.291564 kubelet[2525]: E0213 20:05:39.291215 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.291564 kubelet[2525]: E0213 20:05:39.291367 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.291564 kubelet[2525]: W0213 20:05:39.291374 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.291564 kubelet[2525]: E0213 20:05:39.291439 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.291564 kubelet[2525]: E0213 20:05:39.291548 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.291564 kubelet[2525]: W0213 20:05:39.291558 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.291812 kubelet[2525]: E0213 20:05:39.291612 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.291844 kubelet[2525]: E0213 20:05:39.291837 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.291866 kubelet[2525]: W0213 20:05:39.291849 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.291942 kubelet[2525]: E0213 20:05:39.291913 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.292102 kubelet[2525]: E0213 20:05:39.292085 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.292102 kubelet[2525]: W0213 20:05:39.292098 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.292186 kubelet[2525]: E0213 20:05:39.292129 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.292330 kubelet[2525]: E0213 20:05:39.292306 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.292330 kubelet[2525]: W0213 20:05:39.292320 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.292469 kubelet[2525]: E0213 20:05:39.292355 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.292560 kubelet[2525]: E0213 20:05:39.292546 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.292560 kubelet[2525]: W0213 20:05:39.292557 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.292610 kubelet[2525]: E0213 20:05:39.292572 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.292817 kubelet[2525]: E0213 20:05:39.292774 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.292817 kubelet[2525]: W0213 20:05:39.292810 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.292898 kubelet[2525]: E0213 20:05:39.292831 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.293091 kubelet[2525]: E0213 20:05:39.293075 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.293091 kubelet[2525]: W0213 20:05:39.293088 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.293154 kubelet[2525]: E0213 20:05:39.293104 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.293387 kubelet[2525]: E0213 20:05:39.293370 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.293387 kubelet[2525]: W0213 20:05:39.293383 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.293558 kubelet[2525]: E0213 20:05:39.293481 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.293592 kubelet[2525]: E0213 20:05:39.293581 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.293592 kubelet[2525]: W0213 20:05:39.293589 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.293643 kubelet[2525]: E0213 20:05:39.293627 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.293828 kubelet[2525]: E0213 20:05:39.293812 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.293828 kubelet[2525]: W0213 20:05:39.293823 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.293882 kubelet[2525]: E0213 20:05:39.293847 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.294083 kubelet[2525]: E0213 20:05:39.294043 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.294083 kubelet[2525]: W0213 20:05:39.294069 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.294173 kubelet[2525]: E0213 20:05:39.294105 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.294301 kubelet[2525]: E0213 20:05:39.294273 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.294301 kubelet[2525]: W0213 20:05:39.294286 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.294369 kubelet[2525]: E0213 20:05:39.294305 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.294529 kubelet[2525]: E0213 20:05:39.294515 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.294529 kubelet[2525]: W0213 20:05:39.294526 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.294642 kubelet[2525]: E0213 20:05:39.294539 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.294873 kubelet[2525]: E0213 20:05:39.294853 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.294873 kubelet[2525]: W0213 20:05:39.294868 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.294938 kubelet[2525]: E0213 20:05:39.294885 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.295143 kubelet[2525]: E0213 20:05:39.295128 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.295143 kubelet[2525]: W0213 20:05:39.295140 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.295204 kubelet[2525]: E0213 20:05:39.295158 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.295554 kubelet[2525]: E0213 20:05:39.295536 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.295554 kubelet[2525]: W0213 20:05:39.295548 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.295610 kubelet[2525]: E0213 20:05:39.295594 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.295882 kubelet[2525]: E0213 20:05:39.295860 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.295882 kubelet[2525]: W0213 20:05:39.295874 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.295961 kubelet[2525]: E0213 20:05:39.295887 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.301717 kubelet[2525]: E0213 20:05:39.301698 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.301717 kubelet[2525]: W0213 20:05:39.301713 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.301855 kubelet[2525]: E0213 20:05:39.301724 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.310827 kubelet[2525]: E0213 20:05:39.310233 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:39.314278 containerd[1476]: time="2025-02-13T20:05:39.314151740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:05:39.314427 containerd[1476]: time="2025-02-13T20:05:39.314362332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:05:39.314427 containerd[1476]: time="2025-02-13T20:05:39.314384790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:39.315585 containerd[1476]: time="2025-02-13T20:05:39.315489169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:39.329807 kubelet[2525]: E0213 20:05:39.329751 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:39.330597 containerd[1476]: time="2025-02-13T20:05:39.330467185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6tvvn,Uid:040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f,Namespace:calico-system,Attempt:0,}" Feb 13 20:05:39.338338 systemd[1]: Started cri-containerd-2dbeb1d860760987e706e08868a047cd4df943f390a89f081c4cf4bb37ca8b72.scope - libcontainer container 2dbeb1d860760987e706e08868a047cd4df943f390a89f081c4cf4bb37ca8b72. Feb 13 20:05:39.353809 containerd[1476]: time="2025-02-13T20:05:39.353642773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:05:39.353809 containerd[1476]: time="2025-02-13T20:05:39.353723582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:05:39.353809 containerd[1476]: time="2025-02-13T20:05:39.353738965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:39.354023 containerd[1476]: time="2025-02-13T20:05:39.353850158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:05:39.371970 systemd[1]: Started cri-containerd-46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0.scope - libcontainer container 46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0. Feb 13 20:05:39.376418 containerd[1476]: time="2025-02-13T20:05:39.376389199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f6ffbdf6-t55np,Uid:02be1e70-e3fb-4763-9e90-527e506e9cab,Namespace:calico-system,Attempt:0,} returns sandbox id \"2dbeb1d860760987e706e08868a047cd4df943f390a89f081c4cf4bb37ca8b72\"" Feb 13 20:05:39.376999 kubelet[2525]: E0213 20:05:39.376978 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:39.378098 containerd[1476]: time="2025-02-13T20:05:39.377983187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:05:39.387656 kubelet[2525]: E0213 20:05:39.387499 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.387656 kubelet[2525]: W0213 20:05:39.387523 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.387656 kubelet[2525]: E0213 20:05:39.387548 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.387865 kubelet[2525]: E0213 20:05:39.387839 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.387897 kubelet[2525]: W0213 20:05:39.387864 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.387928 kubelet[2525]: E0213 20:05:39.387914 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.388411 kubelet[2525]: E0213 20:05:39.388386 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.388411 kubelet[2525]: W0213 20:05:39.388400 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.388411 kubelet[2525]: E0213 20:05:39.388409 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.388752 kubelet[2525]: E0213 20:05:39.388689 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.388752 kubelet[2525]: W0213 20:05:39.388706 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.388752 kubelet[2525]: E0213 20:05:39.388728 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.389019 kubelet[2525]: E0213 20:05:39.388995 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.389019 kubelet[2525]: W0213 20:05:39.389006 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.389019 kubelet[2525]: E0213 20:05:39.389016 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.389843 kubelet[2525]: E0213 20:05:39.389804 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.389843 kubelet[2525]: W0213 20:05:39.389829 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.389843 kubelet[2525]: E0213 20:05:39.389837 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.390318 kubelet[2525]: E0213 20:05:39.390285 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.390318 kubelet[2525]: W0213 20:05:39.390298 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.390318 kubelet[2525]: E0213 20:05:39.390307 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.390566 kubelet[2525]: E0213 20:05:39.390542 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.390566 kubelet[2525]: W0213 20:05:39.390556 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.390566 kubelet[2525]: E0213 20:05:39.390565 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:05:39.390926 kubelet[2525]: E0213 20:05:39.390812 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.390926 kubelet[2525]: W0213 20:05:39.390825 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.390926 kubelet[2525]: E0213 20:05:39.390835 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.391066 kubelet[2525]: E0213 20:05:39.391050 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.391066 kubelet[2525]: W0213 20:05:39.391063 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.391130 kubelet[2525]: E0213 20:05:39.391072 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.391348 kubelet[2525]: E0213 20:05:39.391334 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.391348 kubelet[2525]: W0213 20:05:39.391345 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.391409 kubelet[2525]: E0213 20:05:39.391355 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.391761 kubelet[2525]: E0213 20:05:39.391701 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.391761 kubelet[2525]: W0213 20:05:39.391717 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.391761 kubelet[2525]: E0213 20:05:39.391726 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:05:39.392871 kubelet[2525]: E0213 20:05:39.392851 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:05:39.392871 kubelet[2525]: W0213 20:05:39.392866 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:05:39.392937 kubelet[2525]: E0213 20:05:39.392875 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:05:39.401016 containerd[1476]: time="2025-02-13T20:05:39.400921045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6tvvn,Uid:040d4b9a-bcac-4bd6-8e73-7b5d9932ba9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0\""
Feb 13 20:05:39.401640 kubelet[2525]: E0213 20:05:39.401609 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:40.064360 kubelet[2525]: E0213 20:05:40.064332 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:40.098444 kubelet[2525]: E0213 20:05:40.098409 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:05:40.098444 kubelet[2525]: W0213 20:05:40.098434 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:05:40.098571 kubelet[2525]: E0213 20:05:40.098457 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:05:41.026861 kubelet[2525]: E0213 20:05:41.026816 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23"
Feb 13 20:05:41.652257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909868490.mount: Deactivated successfully.
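[Note] The recurring driver-call.go / plugins.go triplets in this log all describe one condition: kubelet's FlexVolume prober finds the directory nodeagent~uds under the plugin path, executes its driver binary with the argument init, gets no executable (so empty stdout), and then fails to unmarshal that empty output as JSON. A conforming driver is any executable that answers init with a JSON status object on stdout. The sketch below is illustrative only (it is not Calico's or Istio's actual driver); the struct mirrors the documented FlexVolume status shape:

    // flexvolume-init-sketch.go - illustrative only; shows the JSON handshake
    // kubelet's driver-call expects from <plugin-dir>/<vendor~driver>/<driver> init.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type driverStatus struct {
    	Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		// Printing nothing here is exactly what yields
    		// "unexpected end of JSON input" on the kubelet side.
    		fmt.Println(string(out))
    		return
    	}
    	// A real driver would also handle mount, unmount, etc.
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    	os.Exit(1)
    }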
Feb 13 20:05:41.910143 containerd[1476]: time="2025-02-13T20:05:41.910010808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:41.911192 containerd[1476]: time="2025-02-13T20:05:41.911146432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Feb 13 20:05:41.912366 containerd[1476]: time="2025-02-13T20:05:41.912315525Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:41.914205 containerd[1476]: time="2025-02-13T20:05:41.914173341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:05:41.914750 containerd[1476]: time="2025-02-13T20:05:41.914724818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.536714895s"
Feb 13 20:05:41.914809 containerd[1476]: time="2025-02-13T20:05:41.914752376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 20:05:41.915734 containerd[1476]: time="2025-02-13T20:05:41.915619249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 20:05:41.921535 containerd[1476]: time="2025-02-13T20:05:41.921479856Z" level=info msg="CreateContainer within sandbox \"2dbeb1d860760987e706e08868a047cd4df943f390a89f081c4cf4bb37ca8b72\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 20:05:41.935770 containerd[1476]: time="2025-02-13T20:05:41.935730881Z" level=info msg="CreateContainer within sandbox \"2dbeb1d860760987e706e08868a047cd4df943f390a89f081c4cf4bb37ca8b72\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"032a452185811dd3ced9cf199cc9dc65f05363d10c451e6ea3f2e1a4f0c2e746\""
Feb 13 20:05:41.936243 containerd[1476]: time="2025-02-13T20:05:41.936210118Z" level=info msg="StartContainer for \"032a452185811dd3ced9cf199cc9dc65f05363d10c451e6ea3f2e1a4f0c2e746\""
Feb 13 20:05:41.964916 systemd[1]: Started cri-containerd-032a452185811dd3ced9cf199cc9dc65f05363d10c451e6ea3f2e1a4f0c2e746.scope - libcontainer container 032a452185811dd3ced9cf199cc9dc65f05363d10c451e6ea3f2e1a4f0c2e746.
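[Note] The CreateContainer/StartContainer pairs above are containerd answering kubelet's CRI RPCs over its unix socket, and the systemd line is the transient cri-containerd-<id>.scope unit the container lands in. A rough sketch of the same two calls through the standard cri-api v1 Go client follows; the socket path, sandbox id and image are taken from the log, everything else is an assumption (in particular, a real CreateContainer also needs the sandbox config and full container config, omitted here), and this is not how kubelet itself is wired:

    // cri-calls-sketch.go - illustrative CRI v1 client calls mirroring the
    // CreateContainer/StartContainer sequence in the log above.
    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// Sandbox id from the "CreateContainer within sandbox" lines above.
    	sandboxID := "2dbeb1d860760987e706e08868a047cd4df943f390a89f081c4cf4bb37ca8b72"

    	create, err := rt.CreateContainer(context.Background(), &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sandboxID,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha", Attempt: 0},
    			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// StartContainer is the call behind "Started cri-containerd-<id>.scope".
    	if _, err := rt.StartContainer(context.Background(), &runtimeapi.StartContainerRequest{
    		ContainerId: create.ContainerId,
    	}); err != nil {
    		log.Fatal(err)
    	}
    }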
Feb 13 20:05:42.001931 containerd[1476]: time="2025-02-13T20:05:42.001887819Z" level=info msg="StartContainer for \"032a452185811dd3ced9cf199cc9dc65f05363d10c451e6ea3f2e1a4f0c2e746\" returns successfully"
Feb 13 20:05:42.071213 kubelet[2525]: E0213 20:05:42.071170 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:42.079267 kubelet[2525]: I0213 20:05:42.079095 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77f6ffbdf6-t55np" podStartSLOduration=1.541302805 podStartE2EDuration="4.079079916s" podCreationTimestamp="2025-02-13 20:05:38 +0000 UTC" firstStartedPulling="2025-02-13 20:05:39.377712048 +0000 UTC m=+12.436986202" lastFinishedPulling="2025-02-13 20:05:41.915489149 +0000 UTC m=+14.974763313" observedRunningTime="2025-02-13 20:05:42.078639875 +0000 UTC m=+15.137914019" watchObservedRunningTime="2025-02-13 20:05:42.079079916 +0000 UTC m=+15.138354070"
Feb 13 20:05:42.116582 kubelet[2525]: E0213 20:05:42.116548 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:05:42.116582 kubelet[2525]: W0213 20:05:42.116567 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:05:42.116675 kubelet[2525]: E0213 20:05:42.116587 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
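[Note] The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (m=+15.138354070 against a 20:05:38 creation gives 4.079079916s), and podStartSLOduration is that end-to-end figure minus the image-pull window, computed from the monotonic m= offsets of firstStartedPulling and lastFinishedPulling. A sketch of the check:

    // slo-check.go - verifies podStartSLOduration = E2E duration minus the
    // image pull window, using the monotonic m= offsets in the kubelet line.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	e2e := 4079079916 * time.Nanosecond // podStartE2EDuration=4.079079916s

    	firstStartedPulling := 12436986202 * time.Nanosecond // m=+12.436986202
    	lastFinishedPulling := 14974763313 * time.Nanosecond // m=+14.974763313
    	pullWindow := lastFinishedPulling - firstStartedPulling

    	fmt.Println(pullWindow)       // 2.537777111s
    	fmt.Println(e2e - pullWindow) // 1.541302805s, matching podStartSLOduration
    }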
Feb 13 20:05:43.027730 kubelet[2525]: E0213 20:05:43.027672 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23"
Feb 13 20:05:43.105733 kubelet[2525]: I0213 20:05:43.105696 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:05:43.106137 kubelet[2525]: E0213 20:05:43.106088 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:05:43.127753 kubelet[2525]: E0213 20:05:43.127715 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:05:43.127753 kubelet[2525]: W0213 20:05:43.127736 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:05:43.127753 kubelet[2525]: E0213 20:05:43.127754 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
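[Note] The recurring dns.go:153 error means the node's resolv.conf lists more nameservers than the limit of three that kubelet enforces (the classic glibc MAXNS constraint), so only the first three are applied to pod resolv.conf files, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. A simplified sketch of that clamping follows; the limit and behavior track kubelet's dns.go, but the parsing here is reduced and the fourth nameserver is a hypothetical example:

    // nameserver-clamp-sketch.go - simplified version of the clamping behind
    // "Nameserver limits exceeded": keep only the first three nameservers.
    package main

    import (
    	"fmt"
    	"strings"
    )

    const maxDNSNameservers = 3 // resolver limit kubelet enforces

    func clampNameservers(resolvConf string) (applied, omitted []string) {
    	for _, line := range strings.Split(resolvConf, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) == 2 && fields[0] == "nameserver" {
    			if len(applied) < maxDNSNameservers {
    				applied = append(applied, fields[1])
    			} else {
    				omitted = append(omitted, fields[1])
    			}
    		}
    	}
    	return applied, omitted
    }

    func main() {
    	// Hypothetical host resolv.conf with one nameserver too many.
    	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    	applied, omitted := clampNameservers(conf)
    	fmt.Println("applied:", strings.Join(applied, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
    	fmt.Println("omitted:", strings.Join(omitted, " ")) // omitted: 8.8.4.4
    }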
Feb 13 20:05:43.228197 kubelet[2525]: E0213 20:05:43.228146 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:05:43.228197 kubelet[2525]: W0213 20:05:43.228174 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:05:43.228197 kubelet[2525]: E0213 20:05:43.228207 2525 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Feb 13 20:05:43.880989 containerd[1476]: time="2025-02-13T20:05:43.880945542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:43.881806 containerd[1476]: time="2025-02-13T20:05:43.881754953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:05:43.882861 containerd[1476]: time="2025-02-13T20:05:43.882830041Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:43.884935 containerd[1476]: time="2025-02-13T20:05:43.884867595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:43.885420 containerd[1476]: time="2025-02-13T20:05:43.885373249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.969717744s" Feb 13 20:05:43.885420 containerd[1476]: time="2025-02-13T20:05:43.885413612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:05:43.887201 containerd[1476]: time="2025-02-13T20:05:43.887163803Z" level=info msg="CreateContainer within sandbox \"46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:05:43.900958 containerd[1476]: time="2025-02-13T20:05:43.900911171Z" level=info msg="CreateContainer within sandbox \"46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc\"" Feb 13 20:05:43.901368 containerd[1476]: time="2025-02-13T20:05:43.901336319Z" level=info msg="StartContainer for \"39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc\"" Feb 13 20:05:43.933925 systemd[1]: Started cri-containerd-39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc.scope - libcontainer container 39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc. Feb 13 20:05:43.961739 containerd[1476]: time="2025-02-13T20:05:43.961707244Z" level=info msg="StartContainer for \"39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc\" returns successfully" Feb 13 20:05:43.989368 systemd[1]: cri-containerd-39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc.scope: Deactivated successfully. 
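An annotation on the repeated FlexVolume failures above: kubelet executes each plugin binary (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the `init` subcommand and unmarshals its stdout as JSON, so a missing executable produces empty output and the "unexpected end of JSON input" error. A minimal sketch of the handshake kubelet expects — an illustrative stand-in, not the Calico uds driver itself:

```python
#!/usr/bin/env python3
# Minimal FlexVolume driver sketch. kubelet invokes the binary with a
# subcommand ("init", "mount", "unmount", ...) and parses stdout as JSON;
# empty stdout is exactly what yields "unexpected end of JSON input".
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success; "attach": False tells kubelet this driver
        # has no separate attach/detach phase.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Decline other operations rather than emitting nothing.
    print(json.dumps({"status": "Not supported",
                      "message": "operation %r not implemented" % op}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```

In this log the probe errors are transient: the flexvol-driver container started above (from the pod2daemon-flexvol image) is what installs the real driver binary into that directory.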
Feb 13 20:05:44.409628 kubelet[2525]: E0213 20:05:44.409593 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:44.623103 containerd[1476]: time="2025-02-13T20:05:44.623041757Z" level=info msg="shim disconnected" id=39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc namespace=k8s.io Feb 13 20:05:44.623103 containerd[1476]: time="2025-02-13T20:05:44.623096861Z" level=warning msg="cleaning up after shim disconnected" id=39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc namespace=k8s.io Feb 13 20:05:44.623103 containerd[1476]: time="2025-02-13T20:05:44.623105618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:05:44.896239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39f9625e76208c6af7055c4a9f54fa70673d327dfc52d0e87ddb64f9b6ac71dc-rootfs.mount: Deactivated successfully. Feb 13 20:05:45.027397 kubelet[2525]: E0213 20:05:45.027353 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23" Feb 13 20:05:45.075935 kubelet[2525]: E0213 20:05:45.075907 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:45.076609 containerd[1476]: time="2025-02-13T20:05:45.076407315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:05:46.129497 kubelet[2525]: I0213 20:05:46.129453 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:05:46.129902 kubelet[2525]: E0213 20:05:46.129755 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:47.027375 kubelet[2525]: E0213 20:05:47.027335 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23" Feb 13 20:05:47.078169 kubelet[2525]: E0213 20:05:47.078128 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:49.026778 kubelet[2525]: E0213 20:05:49.026696 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23" Feb 13 20:05:50.221333 containerd[1476]: time="2025-02-13T20:05:50.221282613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:50.222024 containerd[1476]: time="2025-02-13T20:05:50.221981082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:05:50.223110 
containerd[1476]: time="2025-02-13T20:05:50.223082534Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:50.225157 containerd[1476]: time="2025-02-13T20:05:50.225118662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:50.225772 containerd[1476]: time="2025-02-13T20:05:50.225741668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.149291726s" Feb 13 20:05:50.225817 containerd[1476]: time="2025-02-13T20:05:50.225770406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:05:50.227593 containerd[1476]: time="2025-02-13T20:05:50.227553042Z" level=info msg="CreateContainer within sandbox \"46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:05:50.240481 containerd[1476]: time="2025-02-13T20:05:50.240434841Z" level=info msg="CreateContainer within sandbox \"46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6\"" Feb 13 20:05:50.240892 containerd[1476]: time="2025-02-13T20:05:50.240858956Z" level=info msg="StartContainer for \"7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6\"" Feb 13 20:05:50.287921 systemd[1]: Started cri-containerd-7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6.scope - libcontainer container 7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6. Feb 13 20:05:50.315847 containerd[1476]: time="2025-02-13T20:05:50.315752771Z" level=info msg="StartContainer for \"7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6\" returns successfully" Feb 13 20:05:51.026592 kubelet[2525]: E0213 20:05:51.026548 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23" Feb 13 20:05:51.225918 kubelet[2525]: E0213 20:05:51.225871 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:51.617406 systemd[1]: cri-containerd-7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6.scope: Deactivated successfully. Feb 13 20:05:51.637699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6-rootfs.mount: Deactivated successfully. 
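On the recurring dns.go:153 warnings: kubelet caps a pod's resolv.conf at three nameservers, applies the first three from the node, and logs the remainder as omitted. A small sketch of that trimming rule (the fourth server below is hypothetical, added only to trigger the limit):

```python
# Sketch of the nameserver cap behind kubelet's "Nameserver limits
# exceeded" warnings: at most 3 nameservers are applied; extras are
# dropped and reported as omitted.
MAX_NAMESERVERS = 3  # the classic resolv.conf limit kubelet enforces

def apply_nameserver_limit(servers):
    """Split the node's nameserver list into (applied, omitted)."""
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

applied, omitted = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])  # last entry hypothetical
print("applied:", " ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8, as logged
print("omitted:", " ".join(omitted))
```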
Feb 13 20:05:51.683630 kubelet[2525]: I0213 20:05:51.683590 2525 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:05:51.793748 containerd[1476]: time="2025-02-13T20:05:51.793615895Z" level=info msg="shim disconnected" id=7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6 namespace=k8s.io Feb 13 20:05:51.793748 containerd[1476]: time="2025-02-13T20:05:51.793691066Z" level=warning msg="cleaning up after shim disconnected" id=7ebf815ac089bd0474275b853634b0dbcd8014af758761f6b51d2a3b85c6d9a6 namespace=k8s.io Feb 13 20:05:51.793748 containerd[1476]: time="2025-02-13T20:05:51.793703641Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:05:51.812625 systemd[1]: Created slice kubepods-burstable-podcd3ea36b_ba97_45d6_8dea_55fb8150ac30.slice - libcontainer container kubepods-burstable-podcd3ea36b_ba97_45d6_8dea_55fb8150ac30.slice. Feb 13 20:05:51.821744 systemd[1]: Created slice kubepods-besteffort-pod3abe8309_3cc9_4bb3_b3b5_ac30285bef0e.slice - libcontainer container kubepods-besteffort-pod3abe8309_3cc9_4bb3_b3b5_ac30285bef0e.slice. Feb 13 20:05:51.828093 systemd[1]: Created slice kubepods-burstable-pod8fdc04b2_cc59_4f83_90f5_3dfdfbb973a6.slice - libcontainer container kubepods-burstable-pod8fdc04b2_cc59_4f83_90f5_3dfdfbb973a6.slice. Feb 13 20:05:51.834259 systemd[1]: Created slice kubepods-besteffort-podc6f20208_7647_41bb_a81f_be6437dee785.slice - libcontainer container kubepods-besteffort-podc6f20208_7647_41bb_a81f_be6437dee785.slice. Feb 13 20:05:51.838642 systemd[1]: Created slice kubepods-besteffort-podbdc93042_efe0_448d_ba9e_d249c9f9fc78.slice - libcontainer container kubepods-besteffort-podbdc93042_efe0_448d_ba9e_d249c9f9fc78.slice. Feb 13 20:05:51.958217 kubelet[2525]: I0213 20:05:51.958085 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztmwh\" (UniqueName: \"kubernetes.io/projected/bdc93042-efe0-448d-ba9e-d249c9f9fc78-kube-api-access-ztmwh\") pod \"calico-kube-controllers-765dc7d966-wsz49\" (UID: \"bdc93042-efe0-448d-ba9e-d249c9f9fc78\") " pod="calico-system/calico-kube-controllers-765dc7d966-wsz49" Feb 13 20:05:51.958217 kubelet[2525]: I0213 20:05:51.958135 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7kq\" (UniqueName: \"kubernetes.io/projected/c6f20208-7647-41bb-a81f-be6437dee785-kube-api-access-db7kq\") pod \"calico-apiserver-b6c4c9887-nqgk9\" (UID: \"c6f20208-7647-41bb-a81f-be6437dee785\") " pod="calico-apiserver/calico-apiserver-b6c4c9887-nqgk9" Feb 13 20:05:51.958217 kubelet[2525]: I0213 20:05:51.958154 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc93042-efe0-448d-ba9e-d249c9f9fc78-tigera-ca-bundle\") pod \"calico-kube-controllers-765dc7d966-wsz49\" (UID: \"bdc93042-efe0-448d-ba9e-d249c9f9fc78\") " pod="calico-system/calico-kube-controllers-765dc7d966-wsz49" Feb 13 20:05:51.958217 kubelet[2525]: I0213 20:05:51.958171 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3abe8309-3cc9-4bb3-b3b5-ac30285bef0e-calico-apiserver-certs\") pod \"calico-apiserver-b6c4c9887-84rgs\" (UID: \"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e\") " pod="calico-apiserver/calico-apiserver-b6c4c9887-84rgs" Feb 13 20:05:51.958217 kubelet[2525]: I0213 20:05:51.958192 2525 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7f7k\" (UniqueName: \"kubernetes.io/projected/cd3ea36b-ba97-45d6-8dea-55fb8150ac30-kube-api-access-b7f7k\") pod \"coredns-6f6b679f8f-plm2t\" (UID: \"cd3ea36b-ba97-45d6-8dea-55fb8150ac30\") " pod="kube-system/coredns-6f6b679f8f-plm2t" Feb 13 20:05:51.958442 kubelet[2525]: I0213 20:05:51.958209 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jg4m\" (UniqueName: \"kubernetes.io/projected/8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6-kube-api-access-7jg4m\") pod \"coredns-6f6b679f8f-74cbt\" (UID: \"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6\") " pod="kube-system/coredns-6f6b679f8f-74cbt" Feb 13 20:05:51.958442 kubelet[2525]: I0213 20:05:51.958262 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kd4\" (UniqueName: \"kubernetes.io/projected/3abe8309-3cc9-4bb3-b3b5-ac30285bef0e-kube-api-access-n9kd4\") pod \"calico-apiserver-b6c4c9887-84rgs\" (UID: \"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e\") " pod="calico-apiserver/calico-apiserver-b6c4c9887-84rgs" Feb 13 20:05:51.958442 kubelet[2525]: I0213 20:05:51.958313 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6-config-volume\") pod \"coredns-6f6b679f8f-74cbt\" (UID: \"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6\") " pod="kube-system/coredns-6f6b679f8f-74cbt" Feb 13 20:05:51.958442 kubelet[2525]: I0213 20:05:51.958376 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c6f20208-7647-41bb-a81f-be6437dee785-calico-apiserver-certs\") pod \"calico-apiserver-b6c4c9887-nqgk9\" (UID: \"c6f20208-7647-41bb-a81f-be6437dee785\") " pod="calico-apiserver/calico-apiserver-b6c4c9887-nqgk9" Feb 13 20:05:51.958442 kubelet[2525]: I0213 20:05:51.958402 2525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd3ea36b-ba97-45d6-8dea-55fb8150ac30-config-volume\") pod \"coredns-6f6b679f8f-plm2t\" (UID: \"cd3ea36b-ba97-45d6-8dea-55fb8150ac30\") " pod="kube-system/coredns-6f6b679f8f-plm2t" Feb 13 20:05:52.124255 kubelet[2525]: E0213 20:05:52.124213 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:52.126549 containerd[1476]: time="2025-02-13T20:05:52.126423101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-84rgs,Uid:3abe8309-3cc9-4bb3-b3b5-ac30285bef0e,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:05:52.126549 containerd[1476]: time="2025-02-13T20:05:52.126539075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-plm2t,Uid:cd3ea36b-ba97-45d6-8dea-55fb8150ac30,Namespace:kube-system,Attempt:0,}" Feb 13 20:05:52.131138 kubelet[2525]: E0213 20:05:52.131121 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:52.131482 containerd[1476]: time="2025-02-13T20:05:52.131452189Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-74cbt,Uid:8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6,Namespace:kube-system,Attempt:0,}" Feb 13 20:05:52.137525 containerd[1476]: time="2025-02-13T20:05:52.137484510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-nqgk9,Uid:c6f20208-7647-41bb-a81f-be6437dee785,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:05:52.140898 containerd[1476]: time="2025-02-13T20:05:52.140865738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-765dc7d966-wsz49,Uid:bdc93042-efe0-448d-ba9e-d249c9f9fc78,Namespace:calico-system,Attempt:0,}" Feb 13 20:05:52.265344 containerd[1476]: time="2025-02-13T20:05:52.265130260Z" level=error msg="Failed to destroy network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.265534 containerd[1476]: time="2025-02-13T20:05:52.265321154Z" level=error msg="Failed to destroy network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.266884 containerd[1476]: time="2025-02-13T20:05:52.266446293Z" level=error msg="Failed to destroy network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.270772 containerd[1476]: time="2025-02-13T20:05:52.267475178Z" level=error msg="Failed to destroy network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.270930 kubelet[2525]: E0213 20:05:52.269724 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:52.271293 containerd[1476]: time="2025-02-13T20:05:52.271179746Z" level=error msg="encountered an error cleaning up failed sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.271493 containerd[1476]: time="2025-02-13T20:05:52.271393956Z" level=error msg="encountered an error cleaning up failed sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.271710 containerd[1476]: time="2025-02-13T20:05:52.271659560Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-plm2t,Uid:cd3ea36b-ba97-45d6-8dea-55fb8150ac30,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.272032 containerd[1476]: time="2025-02-13T20:05:52.271990955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-74cbt,Uid:8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.272375 containerd[1476]: time="2025-02-13T20:05:52.272235616Z" level=error msg="encountered an error cleaning up failed sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.272375 containerd[1476]: time="2025-02-13T20:05:52.272279604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-nqgk9,Uid:c6f20208-7647-41bb-a81f-be6437dee785,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.273896 containerd[1476]: time="2025-02-13T20:05:52.273752241Z" level=error msg="encountered an error cleaning up failed sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.273896 containerd[1476]: time="2025-02-13T20:05:52.273809656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-84rgs,Uid:3abe8309-3cc9-4bb3-b3b5-ac30285bef0e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.274620 containerd[1476]: time="2025-02-13T20:05:52.274437066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:05:52.279581 containerd[1476]: time="2025-02-13T20:05:52.279538327Z" level=error msg="Failed to destroy network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.279989 containerd[1476]: 
time="2025-02-13T20:05:52.279953160Z" level=error msg="encountered an error cleaning up failed sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.280041 containerd[1476]: time="2025-02-13T20:05:52.280000866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-765dc7d966-wsz49,Uid:bdc93042-efe0-448d-ba9e-d249c9f9fc78,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.282133 kubelet[2525]: E0213 20:05:52.282095 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.282196 kubelet[2525]: E0213 20:05:52.282134 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.282196 kubelet[2525]: E0213 20:05:52.282101 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.282196 kubelet[2525]: E0213 20:05:52.282162 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-765dc7d966-wsz49" Feb 13 20:05:52.282196 kubelet[2525]: E0213 20:05:52.282183 2525 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-765dc7d966-wsz49" Feb 13 20:05:52.282309 kubelet[2525]: E0213 20:05:52.282189 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6c4c9887-nqgk9" Feb 13 20:05:52.282309 kubelet[2525]: E0213 20:05:52.282208 2525 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6c4c9887-nqgk9" Feb 13 20:05:52.282309 kubelet[2525]: E0213 20:05:52.282215 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.282309 kubelet[2525]: E0213 20:05:52.282231 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6c4c9887-84rgs" Feb 13 20:05:52.282413 kubelet[2525]: E0213 20:05:52.282221 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-765dc7d966-wsz49_calico-system(bdc93042-efe0-448d-ba9e-d249c9f9fc78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-765dc7d966-wsz49_calico-system(bdc93042-efe0-448d-ba9e-d249c9f9fc78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-765dc7d966-wsz49" podUID="bdc93042-efe0-448d-ba9e-d249c9f9fc78" Feb 13 20:05:52.282413 kubelet[2525]: E0213 20:05:52.282251 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6c4c9887-nqgk9_calico-apiserver(c6f20208-7647-41bb-a81f-be6437dee785)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6c4c9887-nqgk9_calico-apiserver(c6f20208-7647-41bb-a81f-be6437dee785)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6c4c9887-nqgk9" podUID="c6f20208-7647-41bb-a81f-be6437dee785" Feb 13 20:05:52.282520 kubelet[2525]: E0213 20:05:52.282189 2525 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-74cbt" Feb 13 20:05:52.282520 kubelet[2525]: E0213 20:05:52.282101 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:52.282520 kubelet[2525]: E0213 20:05:52.282287 2525 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-74cbt" Feb 13 20:05:52.282592 kubelet[2525]: E0213 20:05:52.282330 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-74cbt_kube-system(8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-74cbt_kube-system(8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-74cbt" podUID="8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6" Feb 13 20:05:52.282592 kubelet[2525]: E0213 20:05:52.282249 2525 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6c4c9887-84rgs" Feb 13 20:05:52.282592 kubelet[2525]: E0213 20:05:52.282292 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-plm2t" Feb 13 20:05:52.282693 kubelet[2525]: E0213 20:05:52.282363 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6c4c9887-84rgs_calico-apiserver(3abe8309-3cc9-4bb3-b3b5-ac30285bef0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6c4c9887-84rgs_calico-apiserver(3abe8309-3cc9-4bb3-b3b5-ac30285bef0e)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6c4c9887-84rgs" podUID="3abe8309-3cc9-4bb3-b3b5-ac30285bef0e" Feb 13 20:05:52.282693 kubelet[2525]: E0213 20:05:52.282367 2525 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-plm2t" Feb 13 20:05:52.282693 kubelet[2525]: E0213 20:05:52.282404 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-plm2t_kube-system(cd3ea36b-ba97-45d6-8dea-55fb8150ac30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-plm2t_kube-system(cd3ea36b-ba97-45d6-8dea-55fb8150ac30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-plm2t" podUID="cd3ea36b-ba97-45d6-8dea-55fb8150ac30" Feb 13 20:05:53.032165 systemd[1]: Created slice kubepods-besteffort-podc798cf42_a2d5_48e9_9db3_eab6f1d0ef23.slice - libcontainer container kubepods-besteffort-podc798cf42_a2d5_48e9_9db3_eab6f1d0ef23.slice. 
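Every sandbox failure in the block above reduces to the root cause spelled out in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container writes once it is running with /var/lib/calico mounted. A quick diagnostic sketch (the check is illustrative; only the path comes from the log):

```python
# Check the file the Calico CNI plugin fails to stat in the
# RunPodSandbox errors above. calico/node creates it on startup when
# /var/lib/calico is mounted into the container.
from pathlib import Path

NODENAME = Path("/var/lib/calico/nodename")

if NODENAME.is_file():
    print("calico/node is up; node name:", NODENAME.read_text().strip())
else:
    print("missing", NODENAME, "- calico/node is not running yet "
          "or /var/lib/calico is not mounted into it")
```

That matches the sequence in this log: the node image is still being pulled (PullImage ghcr.io/flatcar/calico/node:v3.29.1 above), so every sandbox create and delete fails until it starts.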
Feb 13 20:05:53.034303 containerd[1476]: time="2025-02-13T20:05:53.034266662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlkfd,Uid:c798cf42-a2d5-48e9-9db3-eab6f1d0ef23,Namespace:calico-system,Attempt:0,}" Feb 13 20:05:53.092131 containerd[1476]: time="2025-02-13T20:05:53.092081930Z" level=error msg="Failed to destroy network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.092504 containerd[1476]: time="2025-02-13T20:05:53.092455928Z" level=error msg="encountered an error cleaning up failed sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.092546 containerd[1476]: time="2025-02-13T20:05:53.092523163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlkfd,Uid:c798cf42-a2d5-48e9-9db3-eab6f1d0ef23,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.092997 kubelet[2525]: E0213 20:05:53.092700 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.092997 kubelet[2525]: E0213 20:05:53.092749 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rlkfd" Feb 13 20:05:53.092997 kubelet[2525]: E0213 20:05:53.092766 2525 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rlkfd" Feb 13 20:05:53.093135 kubelet[2525]: E0213 20:05:53.092826 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rlkfd_calico-system(c798cf42-a2d5-48e9-9db3-eab6f1d0ef23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rlkfd_calico-system(c798cf42-a2d5-48e9-9db3-eab6f1d0ef23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23" Feb 13 20:05:53.094367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be-shm.mount: Deactivated successfully. Feb 13 20:05:53.251670 kubelet[2525]: I0213 20:05:53.251637 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:05:53.252783 kubelet[2525]: I0213 20:05:53.252731 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:05:53.254405 kubelet[2525]: I0213 20:05:53.254381 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:05:53.255118 containerd[1476]: time="2025-02-13T20:05:53.255072570Z" level=info msg="StopPodSandbox for \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\"" Feb 13 20:05:53.255196 containerd[1476]: time="2025-02-13T20:05:53.255156718Z" level=info msg="StopPodSandbox for \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\"" Feb 13 20:05:53.255609 kubelet[2525]: I0213 20:05:53.255580 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:05:53.256018 containerd[1476]: time="2025-02-13T20:05:53.255996251Z" level=info msg="StopPodSandbox for \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\"" Feb 13 20:05:53.256507 containerd[1476]: time="2025-02-13T20:05:53.256458807Z" level=info msg="StopPodSandbox for \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\"" Feb 13 20:05:53.257659 containerd[1476]: time="2025-02-13T20:05:53.257552147Z" level=info msg="Ensure that sandbox d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c in task-service has been cleanup successfully" Feb 13 20:05:53.257659 containerd[1476]: time="2025-02-13T20:05:53.257567237Z" level=info msg="Ensure that sandbox 21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800 in task-service has been cleanup successfully" Feb 13 20:05:53.257764 containerd[1476]: time="2025-02-13T20:05:53.257557558Z" level=info msg="Ensure that sandbox cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea in task-service has been cleanup successfully" Feb 13 20:05:53.262250 kubelet[2525]: I0213 20:05:53.261882 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:05:53.263020 containerd[1476]: time="2025-02-13T20:05:53.262999389Z" level=info msg="StopPodSandbox for \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\"" Feb 13 20:05:53.263215 containerd[1476]: time="2025-02-13T20:05:53.263198327Z" level=info msg="Ensure that sandbox 8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7 in task-service has been cleanup successfully" Feb 13 20:05:53.268604 kubelet[2525]: I0213 20:05:53.268566 2525 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:05:53.270335 containerd[1476]: time="2025-02-13T20:05:53.269479120Z" level=info msg="StopPodSandbox for \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\"" Feb 13 20:05:53.270335 containerd[1476]: time="2025-02-13T20:05:53.269660553Z" level=info msg="Ensure that sandbox 20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be in task-service has been cleanup successfully" Feb 13 20:05:53.279272 containerd[1476]: time="2025-02-13T20:05:53.279221947Z" level=info msg="Ensure that sandbox 1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc in task-service has been cleanup successfully" Feb 13 20:05:53.307458 containerd[1476]: time="2025-02-13T20:05:53.307273441Z" level=error msg="StopPodSandbox for \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\" failed" error="failed to destroy network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.308000 kubelet[2525]: E0213 20:05:53.307695 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:05:53.308000 kubelet[2525]: E0213 20:05:53.307753 2525 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800"} Feb 13 20:05:53.308000 kubelet[2525]: E0213 20:05:53.307835 2525 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6f20208-7647-41bb-a81f-be6437dee785\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:05:53.308000 kubelet[2525]: E0213 20:05:53.307859 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6f20208-7647-41bb-a81f-be6437dee785\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6c4c9887-nqgk9" podUID="c6f20208-7647-41bb-a81f-be6437dee785" Feb 13 20:05:53.309174 containerd[1476]: time="2025-02-13T20:05:53.309120711Z" level=error msg="StopPodSandbox for \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\" failed" error="failed to destroy network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.310108 kubelet[2525]: E0213 20:05:53.310076 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:05:53.310166 kubelet[2525]: E0213 20:05:53.310110 2525 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea"} Feb 13 20:05:53.310166 kubelet[2525]: E0213 20:05:53.310131 2525 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd3ea36b-ba97-45d6-8dea-55fb8150ac30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:05:53.310166 kubelet[2525]: E0213 20:05:53.310148 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd3ea36b-ba97-45d6-8dea-55fb8150ac30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-plm2t" podUID="cd3ea36b-ba97-45d6-8dea-55fb8150ac30" Feb 13 20:05:53.314631 containerd[1476]: time="2025-02-13T20:05:53.314583555Z" level=error msg="StopPodSandbox for \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\" failed" error="failed to destroy network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.315028 kubelet[2525]: E0213 20:05:53.315002 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:05:53.315088 kubelet[2525]: E0213 20:05:53.315029 2525 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be"} Feb 13 20:05:53.315088 kubelet[2525]: E0213 20:05:53.315053 2525 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:05:53.315088 kubelet[2525]: E0213 20:05:53.315069 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rlkfd" podUID="c798cf42-a2d5-48e9-9db3-eab6f1d0ef23" Feb 13 20:05:53.316230 containerd[1476]: time="2025-02-13T20:05:53.316168209Z" level=error msg="StopPodSandbox for \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\" failed" error="failed to destroy network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.316811 kubelet[2525]: E0213 20:05:53.316392 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:05:53.316811 kubelet[2525]: E0213 20:05:53.316450 2525 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7"} Feb 13 20:05:53.316811 kubelet[2525]: E0213 20:05:53.316495 2525 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:05:53.316811 kubelet[2525]: E0213 20:05:53.316519 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6c4c9887-84rgs" podUID="3abe8309-3cc9-4bb3-b3b5-ac30285bef0e" Feb 13 20:05:53.319501 containerd[1476]: time="2025-02-13T20:05:53.319452719Z" level=error 
msg="StopPodSandbox for \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\" failed" error="failed to destroy network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.319608 kubelet[2525]: E0213 20:05:53.319589 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:05:53.319608 kubelet[2525]: E0213 20:05:53.319611 2525 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c"} Feb 13 20:05:53.319683 kubelet[2525]: E0213 20:05:53.319632 2525 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdc93042-efe0-448d-ba9e-d249c9f9fc78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:05:53.319683 kubelet[2525]: E0213 20:05:53.319651 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdc93042-efe0-448d-ba9e-d249c9f9fc78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-765dc7d966-wsz49" podUID="bdc93042-efe0-448d-ba9e-d249c9f9fc78" Feb 13 20:05:53.321007 containerd[1476]: time="2025-02-13T20:05:53.320966161Z" level=error msg="StopPodSandbox for \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\" failed" error="failed to destroy network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:05:53.321107 kubelet[2525]: E0213 20:05:53.321082 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:05:53.321154 kubelet[2525]: E0213 20:05:53.321108 2525 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc"} Feb 13 20:05:53.321154 kubelet[2525]: E0213 20:05:53.321128 2525 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:05:53.321154 kubelet[2525]: E0213 20:05:53.321147 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-74cbt" podUID="8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6" Feb 13 20:05:57.161116 systemd[1]: Started sshd@9-10.0.0.159:22-10.0.0.1:48308.service - OpenSSH per-connection server daemon (10.0.0.1:48308). Feb 13 20:05:57.231238 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 48308 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:05:57.233293 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:57.238296 systemd-logind[1462]: New session 10 of user core. Feb 13 20:05:57.247978 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:05:57.374215 sshd[3714]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:57.381598 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:05:57.382029 systemd[1]: sshd@9-10.0.0.159:22-10.0.0.1:48308.service: Deactivated successfully. Feb 13 20:05:57.384500 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:05:57.385845 systemd-logind[1462]: Removed session 10. Feb 13 20:05:59.125197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064493774.mount: Deactivated successfully. 
Feb 13 20:05:59.861857 containerd[1476]: time="2025-02-13T20:05:59.858933448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:05:59.863346 containerd[1476]: time="2025-02-13T20:05:59.858010580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:59.865132 containerd[1476]: time="2025-02-13T20:05:59.865105681Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:59.867083 containerd[1476]: time="2025-02-13T20:05:59.867053772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:05:59.867642 containerd[1476]: time="2025-02-13T20:05:59.867609705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.593132999s" Feb 13 20:05:59.867678 containerd[1476]: time="2025-02-13T20:05:59.867639774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:05:59.879024 containerd[1476]: time="2025-02-13T20:05:59.878981381Z" level=info msg="CreateContainer within sandbox \"46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:05:59.943598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569159271.mount: Deactivated successfully. Feb 13 20:05:59.946343 containerd[1476]: time="2025-02-13T20:05:59.946299330Z" level=info msg="CreateContainer within sandbox \"46d28d2fa70e7bd7f9f94f6e807ba81a49a14370b5083020f3c6adf534a219e0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5b99fb3ddd7b14c7340945266ad1f385cce2018bbb6658ef9ea576ddc2b31209\"" Feb 13 20:05:59.946912 containerd[1476]: time="2025-02-13T20:05:59.946876934Z" level=info msg="StartContainer for \"5b99fb3ddd7b14c7340945266ad1f385cce2018bbb6658ef9ea576ddc2b31209\"" Feb 13 20:06:00.014946 systemd[1]: Started cri-containerd-5b99fb3ddd7b14c7340945266ad1f385cce2018bbb6658ef9ea576ddc2b31209.scope - libcontainer container 5b99fb3ddd7b14c7340945266ad1f385cce2018bbb6658ef9ea576ddc2b31209. Feb 13 20:06:00.468365 containerd[1476]: time="2025-02-13T20:06:00.468271623Z" level=info msg="StartContainer for \"5b99fb3ddd7b14c7340945266ad1f385cce2018bbb6658ef9ea576ddc2b31209\" returns successfully" Feb 13 20:06:00.496623 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:06:00.496735 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
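The "in 7.593132999s" figure above is a Go time.Duration string, and it ties into the startup-latency record just below: lastFinishedPulling (20:05:59.868) is the moment this pull completes, the pulling window is lastFinishedPulling minus firstStartedPulling, about 20.466s, and podStartSLOduration is podStartE2EDuration minus that window, 22.490211292 - 20.466079348 = 2.024131944s, i.e. the tracker excludes image-pull time from the SLO figure. A sketch verifying the arithmetic from the timestamps as logged (the result agrees with the logged values to within a few nanoseconds, since kubelet actually subtracts the monotonic m=+ offsets rather than the wall-clock strings):

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching Go's default time.Time string form used in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// The pull duration round-trips exactly through the standard library.
	d, _ := time.ParseDuration("7.593132999s")
	fmt.Println(d) // 7.593132999s

	created := mustParse("2025-02-13 20:05:39 +0000 UTC")
	firstPull := mustParse("2025-02-13 20:05:39.402210905 +0000 UTC")
	lastPull := mustParse("2025-02-13 20:05:59.868290243 +0000 UTC")
	observed := mustParse("2025-02-13 20:06:01.490211292 +0000 UTC")

	e2e := observed.Sub(created)       // 22.490211292s (podStartE2EDuration)
	pulling := lastPull.Sub(firstPull) // ~20.466079338s (image-pull window)
	fmt.Println(e2e, pulling, e2e-pulling) // SLO figure: ~2.024131954s
}
```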
Feb 13 20:06:01.475086 kubelet[2525]: E0213 20:06:01.475019 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:01.490393 kubelet[2525]: I0213 20:06:01.490231 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6tvvn" podStartSLOduration=2.024131944 podStartE2EDuration="22.490211292s" podCreationTimestamp="2025-02-13 20:05:39 +0000 UTC" firstStartedPulling="2025-02-13 20:05:39.402210905 +0000 UTC m=+12.461485049" lastFinishedPulling="2025-02-13 20:05:59.868290243 +0000 UTC m=+32.927564397" observedRunningTime="2025-02-13 20:06:01.489352926 +0000 UTC m=+34.548627100" watchObservedRunningTime="2025-02-13 20:06:01.490211292 +0000 UTC m=+34.549485456" Feb 13 20:06:01.508723 systemd[1]: run-containerd-runc-k8s.io-5b99fb3ddd7b14c7340945266ad1f385cce2018bbb6658ef9ea576ddc2b31209-runc.MIRKEh.mount: Deactivated successfully. Feb 13 20:06:01.983822 kernel: bpftool[3952]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:06:02.203857 systemd-networkd[1407]: vxlan.calico: Link UP Feb 13 20:06:02.203867 systemd-networkd[1407]: vxlan.calico: Gained carrier Feb 13 20:06:02.389769 systemd[1]: Started sshd@10-10.0.0.159:22-10.0.0.1:48312.service - OpenSSH per-connection server daemon (10.0.0.1:48312). Feb 13 20:06:02.444063 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 48312 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:02.446294 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:02.450986 systemd-logind[1462]: New session 11 of user core. Feb 13 20:06:02.456958 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:06:02.477723 kubelet[2525]: E0213 20:06:02.477653 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:02.498826 systemd[1]: run-containerd-runc-k8s.io-5b99fb3ddd7b14c7340945266ad1f385cce2018bbb6658ef9ea576ddc2b31209-runc.v5fSyn.mount: Deactivated successfully. Feb 13 20:06:02.585915 sshd[3995]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:02.589702 systemd[1]: sshd@10-10.0.0.159:22-10.0.0.1:48312.service: Deactivated successfully. Feb 13 20:06:02.591776 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:06:02.592388 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:06:02.593210 systemd-logind[1462]: Removed session 11. Feb 13 20:06:04.023006 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Feb 13 20:06:05.027784 containerd[1476]: time="2025-02-13T20:06:05.027429663Z" level=info msg="StopPodSandbox for \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\"" Feb 13 20:06:05.028763 containerd[1476]: time="2025-02-13T20:06:05.028508295Z" level=info msg="StopPodSandbox for \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\"" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4100] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4100] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" iface="eth0" netns="/var/run/netns/cni-f565358f-5c0c-aa69-bdd0-585b720de384" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4100] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" iface="eth0" netns="/var/run/netns/cni-f565358f-5c0c-aa69-bdd0-585b720de384" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4100] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" iface="eth0" netns="/var/run/netns/cni-f565358f-5c0c-aa69-bdd0-585b720de384" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4100] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.139 [INFO][4114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.139 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.139 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.145 [WARNING][4114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.145 [INFO][4114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.146 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:05.151626 containerd[1476]: 2025-02-13 20:06:05.149 [INFO][4100] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:05.153405 containerd[1476]: time="2025-02-13T20:06:05.151870692Z" level=info msg="TearDown network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\" successfully" Feb 13 20:06:05.153405 containerd[1476]: time="2025-02-13T20:06:05.151895139Z" level=info msg="StopPodSandbox for \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\" returns successfully" Feb 13 20:06:05.153405 containerd[1476]: time="2025-02-13T20:06:05.152605608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlkfd,Uid:c798cf42-a2d5-48e9-9db3-eab6f1d0ef23,Namespace:calico-system,Attempt:1,}" Feb 13 20:06:05.154353 systemd[1]: run-netns-cni\x2df565358f\x2d5c0c\x2daa69\x2dbdd0\x2d585b720de384.mount: Deactivated successfully. Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.077 [INFO][4101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.077 [INFO][4101] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" iface="eth0" netns="/var/run/netns/cni-dda22f82-b9f2-bb7f-5f6f-7b9ecf08750f" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.078 [INFO][4101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" iface="eth0" netns="/var/run/netns/cni-dda22f82-b9f2-bb7f-5f6f-7b9ecf08750f" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" iface="eth0" netns="/var/run/netns/cni-dda22f82-b9f2-bb7f-5f6f-7b9ecf08750f" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.080 [INFO][4101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.139 [INFO][4115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.139 [INFO][4115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.146 [INFO][4115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.151 [WARNING][4115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.151 [INFO][4115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.153 [INFO][4115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:05.158934 containerd[1476]: 2025-02-13 20:06:05.156 [INFO][4101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:05.159245 containerd[1476]: time="2025-02-13T20:06:05.159049265Z" level=info msg="TearDown network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\" successfully" Feb 13 20:06:05.159245 containerd[1476]: time="2025-02-13T20:06:05.159065878Z" level=info msg="StopPodSandbox for \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\" returns successfully" Feb 13 20:06:05.159539 containerd[1476]: time="2025-02-13T20:06:05.159518067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-nqgk9,Uid:c6f20208-7647-41bb-a81f-be6437dee785,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:06:05.161937 systemd[1]: run-netns-cni\x2ddda22f82\x2db9f2\x2dbb7f\x2d5f6f\x2d7b9ecf08750f.mount: Deactivated successfully. Feb 13 20:06:05.260864 systemd-networkd[1407]: cali042991a0f4b: Link UP Feb 13 20:06:05.261082 systemd-networkd[1407]: cali042991a0f4b: Gained carrier Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.202 [INFO][4129] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rlkfd-eth0 csi-node-driver- calico-system c798cf42-a2d5-48e9-9db3-eab6f1d0ef23 822 0 2025-02-13 20:05:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rlkfd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali042991a0f4b [] []}} ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.202 [INFO][4129] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.229 [INFO][4155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" HandleID="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" 
Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.236 [INFO][4155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" HandleID="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rlkfd", "timestamp":"2025-02-13 20:06:05.229091939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.236 [INFO][4155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.236 [INFO][4155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.236 [INFO][4155] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.238 [INFO][4155] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.242 [INFO][4155] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.245 [INFO][4155] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.246 [INFO][4155] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.248 [INFO][4155] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.248 [INFO][4155] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.249 [INFO][4155] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.252 [INFO][4155] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.255 [INFO][4155] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.255 [INFO][4155] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" host="localhost" Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.255 [INFO][4155] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Feb 13 20:06:05.274220 containerd[1476]: 2025-02-13 20:06:05.255 [INFO][4155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" HandleID="k8s-pod-network.e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.274990 containerd[1476]: 2025-02-13 20:06:05.258 [INFO][4129] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rlkfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rlkfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali042991a0f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:05.274990 containerd[1476]: 2025-02-13 20:06:05.258 [INFO][4129] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.274990 containerd[1476]: 2025-02-13 20:06:05.258 [INFO][4129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali042991a0f4b ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.274990 containerd[1476]: 2025-02-13 20:06:05.261 [INFO][4129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.274990 containerd[1476]: 2025-02-13 20:06:05.262 [INFO][4129] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rlkfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede", Pod:"csi-node-driver-rlkfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali042991a0f4b", MAC:"36:22:78:46:b0:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:05.274990 containerd[1476]: 2025-02-13 20:06:05.272 [INFO][4129] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede" Namespace="calico-system" Pod="csi-node-driver-rlkfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:05.301042 containerd[1476]: time="2025-02-13T20:06:05.300913281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:06:05.301042 containerd[1476]: time="2025-02-13T20:06:05.300970975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:06:05.301042 containerd[1476]: time="2025-02-13T20:06:05.300984191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:05.301167 containerd[1476]: time="2025-02-13T20:06:05.301070791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:05.318925 systemd[1]: Started cri-containerd-e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede.scope - libcontainer container e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede. 
Feb 13 20:06:05.328971 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:06:05.338491 containerd[1476]: time="2025-02-13T20:06:05.338449822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlkfd,Uid:c798cf42-a2d5-48e9-9db3-eab6f1d0ef23,Namespace:calico-system,Attempt:1,} returns sandbox id \"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede\"" Feb 13 20:06:05.340346 containerd[1476]: time="2025-02-13T20:06:05.340317135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:06:05.364252 systemd-networkd[1407]: cali27e0704bf72: Link UP Feb 13 20:06:05.364530 systemd-networkd[1407]: cali27e0704bf72: Gained carrier Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.207 [INFO][4139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0 calico-apiserver-b6c4c9887- calico-apiserver c6f20208-7647-41bb-a81f-be6437dee785 820 0 2025-02-13 20:05:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b6c4c9887 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b6c4c9887-nqgk9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali27e0704bf72 [] []}} ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.207 [INFO][4139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.231 [INFO][4161] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" HandleID="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.237 [INFO][4161] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" HandleID="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b6c4c9887-nqgk9", "timestamp":"2025-02-13 20:06:05.231292728 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.237 [INFO][4161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.255 [INFO][4161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.255 [INFO][4161] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.339 [INFO][4161] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.344 [INFO][4161] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.347 [INFO][4161] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.348 [INFO][4161] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.350 [INFO][4161] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.350 [INFO][4161] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.351 [INFO][4161] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.354 [INFO][4161] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.358 [INFO][4161] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.358 [INFO][4161] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" host="localhost" Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.358 [INFO][4161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:06:05.375639 containerd[1476]: 2025-02-13 20:06:05.358 [INFO][4161] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" HandleID="k8s-pod-network.f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.377109 containerd[1476]: 2025-02-13 20:06:05.362 [INFO][4139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6f20208-7647-41bb-a81f-be6437dee785", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b6c4c9887-nqgk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27e0704bf72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:05.377109 containerd[1476]: 2025-02-13 20:06:05.362 [INFO][4139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.377109 containerd[1476]: 2025-02-13 20:06:05.362 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27e0704bf72 ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.377109 containerd[1476]: 2025-02-13 20:06:05.364 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.377109 containerd[1476]: 2025-02-13 20:06:05.365 [INFO][4139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" 
Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6f20208-7647-41bb-a81f-be6437dee785", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae", Pod:"calico-apiserver-b6c4c9887-nqgk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27e0704bf72", MAC:"46:9e:48:16:ed:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:05.377109 containerd[1476]: 2025-02-13 20:06:05.373 [INFO][4139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-nqgk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:05.398027 containerd[1476]: time="2025-02-13T20:06:05.397196932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:06:05.398192 containerd[1476]: time="2025-02-13T20:06:05.398119137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:06:05.398192 containerd[1476]: time="2025-02-13T20:06:05.398142202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:05.398402 containerd[1476]: time="2025-02-13T20:06:05.398356042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:05.420914 systemd[1]: Started cri-containerd-f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae.scope - libcontainer container f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae. 
Feb 13 20:06:05.432171 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:06:05.458366 containerd[1476]: time="2025-02-13T20:06:05.458315027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-nqgk9,Uid:c6f20208-7647-41bb-a81f-be6437dee785,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae\"" Feb 13 20:06:06.027496 containerd[1476]: time="2025-02-13T20:06:06.027453220Z" level=info msg="StopPodSandbox for \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\"" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.068 [INFO][4297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.068 [INFO][4297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" iface="eth0" netns="/var/run/netns/cni-da211f1b-77b7-a95d-f9fb-06e5bbf8fa48" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.068 [INFO][4297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" iface="eth0" netns="/var/run/netns/cni-da211f1b-77b7-a95d-f9fb-06e5bbf8fa48" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.068 [INFO][4297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" iface="eth0" netns="/var/run/netns/cni-da211f1b-77b7-a95d-f9fb-06e5bbf8fa48" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.068 [INFO][4297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.068 [INFO][4297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.089 [INFO][4304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.089 [INFO][4304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.089 [INFO][4304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.094 [WARNING][4304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.094 [INFO][4304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.095 [INFO][4304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:06.100573 containerd[1476]: 2025-02-13 20:06:06.098 [INFO][4297] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:06.101276 containerd[1476]: time="2025-02-13T20:06:06.100717752Z" level=info msg="TearDown network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\" successfully" Feb 13 20:06:06.101276 containerd[1476]: time="2025-02-13T20:06:06.100743042Z" level=info msg="StopPodSandbox for \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\" returns successfully" Feb 13 20:06:06.101326 kubelet[2525]: E0213 20:06:06.101128 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:06.101710 containerd[1476]: time="2025-02-13T20:06:06.101660556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-plm2t,Uid:cd3ea36b-ba97-45d6-8dea-55fb8150ac30,Namespace:kube-system,Attempt:1,}" Feb 13 20:06:06.157308 systemd[1]: run-netns-cni\x2dda211f1b\x2d77b7\x2da95d\x2df9fb\x2d06e5bbf8fa48.mount: Deactivated successfully. 
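The mount unit names systemd logs here are escaped paths: a "-" inside a path component becomes \x2d and "/" separators become "-", so run-netns-cni\x2dda211f1b\x2d77b7\x2da95d\x2df9fb\x2d06e5bbf8fa48.mount maps back to the netns path /run/netns/cni-da211f1b-77b7-a95d-f9fb-06e5bbf8fa48 that the teardown above deleted (the log's /var/run prefix is a symlink to /run on this system). A sketch of the reverse mapping for this specific name, not a general systemd unescape (which would also have to handle other \xXX sequences):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := `run-netns-cni\x2dda211f1b\x2d77b7\x2da95d\x2df9fb\x2d06e5bbf8fa48`
	// First turn the "-" separators back into "/", then decode the
	// escaped hyphens; the order matters, since \x2d contains no "-".
	path := "/" + strings.ReplaceAll(strings.ReplaceAll(unit, "-", "/"), `\x2d`, "-")
	fmt.Println(path) // /run/netns/cni-da211f1b-77b7-a95d-f9fb-06e5bbf8fa48
}
```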
Feb 13 20:06:06.205421 systemd-networkd[1407]: calidedc00032b6: Link UP Feb 13 20:06:06.205923 systemd-networkd[1407]: calidedc00032b6: Gained carrier Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.143 [INFO][4314] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--plm2t-eth0 coredns-6f6b679f8f- kube-system cd3ea36b-ba97-45d6-8dea-55fb8150ac30 846 0 2025-02-13 20:05:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-plm2t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidedc00032b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.143 [INFO][4314] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.170 [INFO][4327] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" HandleID="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.177 [INFO][4327] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" HandleID="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ac820), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-plm2t", "timestamp":"2025-02-13 20:06:06.170538536 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.177 [INFO][4327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.177 [INFO][4327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.177 [INFO][4327] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.179 [INFO][4327] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.183 [INFO][4327] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.187 [INFO][4327] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.188 [INFO][4327] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.190 [INFO][4327] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.190 [INFO][4327] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.191 [INFO][4327] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205 Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.195 [INFO][4327] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.200 [INFO][4327] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.200 [INFO][4327] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" host="localhost" Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.200 [INFO][4327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:06:06.216314 containerd[1476]: 2025-02-13 20:06:06.200 [INFO][4327] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" HandleID="k8s-pod-network.73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.216822 containerd[1476]: 2025-02-13 20:06:06.203 [INFO][4314] cni-plugin/k8s.go 386: Populated endpoint ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--plm2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cd3ea36b-ba97-45d6-8dea-55fb8150ac30", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-plm2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidedc00032b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:06.216822 containerd[1476]: 2025-02-13 20:06:06.203 [INFO][4314] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.216822 containerd[1476]: 2025-02-13 20:06:06.203 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidedc00032b6 ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.216822 containerd[1476]: 2025-02-13 20:06:06.205 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.216822 containerd[1476]: 2025-02-13 20:06:06.206 
[INFO][4314] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--plm2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cd3ea36b-ba97-45d6-8dea-55fb8150ac30", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205", Pod:"coredns-6f6b679f8f-plm2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidedc00032b6", MAC:"1e:af:ae:8b:bb:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:06.216822 containerd[1476]: 2025-02-13 20:06:06.213 [INFO][4314] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205" Namespace="kube-system" Pod="coredns-6f6b679f8f-plm2t" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:06.234997 containerd[1476]: time="2025-02-13T20:06:06.234877254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:06:06.234997 containerd[1476]: time="2025-02-13T20:06:06.234950278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:06:06.234997 containerd[1476]: time="2025-02-13T20:06:06.234966589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:06.235194 containerd[1476]: time="2025-02-13T20:06:06.235093348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:06.262936 systemd[1]: Started cri-containerd-73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205.scope - libcontainer container 73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205. 
Feb 13 20:06:06.274287 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 20:06:06.298637 containerd[1476]: time="2025-02-13T20:06:06.298526247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-plm2t,Uid:cd3ea36b-ba97-45d6-8dea-55fb8150ac30,Namespace:kube-system,Attempt:1,} returns sandbox id \"73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205\""
Feb 13 20:06:06.299633 kubelet[2525]: E0213 20:06:06.299606 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:06.301447 containerd[1476]: time="2025-02-13T20:06:06.301404324Z" level=info msg="CreateContainer within sandbox \"73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:06:06.329848 containerd[1476]: time="2025-02-13T20:06:06.329770978Z" level=info msg="CreateContainer within sandbox \"73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"894e6329bb343f4e5766aeaf651c3a06dfd4dd245f73dc54b2f5b966160f44c2\""
Feb 13 20:06:06.330514 containerd[1476]: time="2025-02-13T20:06:06.330467748Z" level=info msg="StartContainer for \"894e6329bb343f4e5766aeaf651c3a06dfd4dd245f73dc54b2f5b966160f44c2\""
Feb 13 20:06:06.360918 systemd[1]: Started cri-containerd-894e6329bb343f4e5766aeaf651c3a06dfd4dd245f73dc54b2f5b966160f44c2.scope - libcontainer container 894e6329bb343f4e5766aeaf651c3a06dfd4dd245f73dc54b2f5b966160f44c2.
Feb 13 20:06:06.389915 containerd[1476]: time="2025-02-13T20:06:06.389865826Z" level=info msg="StartContainer for \"894e6329bb343f4e5766aeaf651c3a06dfd4dd245f73dc54b2f5b966160f44c2\" returns successfully"
Feb 13 20:06:06.488356 kubelet[2525]: E0213 20:06:06.488321 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:06.499460 kubelet[2525]: I0213 20:06:06.499386 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-plm2t" podStartSLOduration=33.499369926 podStartE2EDuration="33.499369926s" podCreationTimestamp="2025-02-13 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:06:06.498109078 +0000 UTC m=+39.557383232" watchObservedRunningTime="2025-02-13 20:06:06.499369926 +0000 UTC m=+39.558644080"
Feb 13 20:06:06.582952 systemd-networkd[1407]: cali042991a0f4b: Gained IPv6LL
Feb 13 20:06:07.028117 containerd[1476]: time="2025-02-13T20:06:07.027826283Z" level=info msg="StopPodSandbox for \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\""
Feb 13 20:06:07.028274 containerd[1476]: time="2025-02-13T20:06:07.028131613Z" level=info msg="StopPodSandbox for \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\""
Feb 13 20:06:07.156105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144503375.mount: Deactivated successfully.
Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4465] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4465] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" iface="eth0" netns="/var/run/netns/cni-02886492-77be-d93d-2d0a-c654ef688a32" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4465] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" iface="eth0" netns="/var/run/netns/cni-02886492-77be-d93d-2d0a-c654ef688a32" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4465] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" iface="eth0" netns="/var/run/netns/cni-02886492-77be-d93d-2d0a-c654ef688a32" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4465] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.119 [INFO][4465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.143 [INFO][4481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.143 [INFO][4481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.143 [INFO][4481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.150 [WARNING][4481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.150 [INFO][4481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.152 [INFO][4481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:07.161457 containerd[1476]: 2025-02-13 20:06:07.158 [INFO][4465] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:07.161457 containerd[1476]: time="2025-02-13T20:06:07.161439131Z" level=info msg="TearDown network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\" successfully" Feb 13 20:06:07.162120 containerd[1476]: time="2025-02-13T20:06:07.161470272Z" level=info msg="StopPodSandbox for \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\" returns successfully" Feb 13 20:06:07.164218 containerd[1476]: time="2025-02-13T20:06:07.163546318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-84rgs,Uid:3abe8309-3cc9-4bb3-b3b5-ac30285bef0e,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:06:07.165026 systemd[1]: run-netns-cni\x2d02886492\x2d77be\x2dd93d\x2d2d0a\x2dc654ef688a32.mount: Deactivated successfully. Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.115 [INFO][4466] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.116 [INFO][4466] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" iface="eth0" netns="/var/run/netns/cni-e040160a-7da5-5341-7a32-9cb3afe0328c" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4466] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" iface="eth0" netns="/var/run/netns/cni-e040160a-7da5-5341-7a32-9cb3afe0328c" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4466] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" iface="eth0" netns="/var/run/netns/cni-e040160a-7da5-5341-7a32-9cb3afe0328c" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4466] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.118 [INFO][4466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.146 [INFO][4480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.146 [INFO][4480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.152 [INFO][4480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.162 [WARNING][4480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.162 [INFO][4480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.164 [INFO][4480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:07.171186 containerd[1476]: 2025-02-13 20:06:07.168 [INFO][4466] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:07.171735 containerd[1476]: time="2025-02-13T20:06:07.171487803Z" level=info msg="TearDown network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\" successfully" Feb 13 20:06:07.171735 containerd[1476]: time="2025-02-13T20:06:07.171507372Z" level=info msg="StopPodSandbox for \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\" returns successfully" Feb 13 20:06:07.172035 kubelet[2525]: E0213 20:06:07.171891 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:07.173646 containerd[1476]: time="2025-02-13T20:06:07.173158584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-74cbt,Uid:8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6,Namespace:kube-system,Attempt:1,}" Feb 13 20:06:07.174303 systemd[1]: run-netns-cni\x2de040160a\x2d7da5\x2d5341\x2d7a32\x2d9cb3afe0328c.mount: Deactivated successfully. 
Feb 13 20:06:07.394082 systemd-networkd[1407]: cali50e295ea44b: Link UP Feb 13 20:06:07.395199 systemd-networkd[1407]: cali50e295ea44b: Gained carrier Feb 13 20:06:07.414957 systemd-networkd[1407]: cali27e0704bf72: Gained IPv6LL Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.222 [INFO][4501] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0 calico-apiserver-b6c4c9887- calico-apiserver 3abe8309-3cc9-4bb3-b3b5-ac30285bef0e 873 0 2025-02-13 20:05:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b6c4c9887 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b6c4c9887-84rgs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali50e295ea44b [] []}} ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.222 [INFO][4501] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.257 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" HandleID="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.264 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" HandleID="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002419e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b6c4c9887-84rgs", "timestamp":"2025-02-13 20:06:07.257358248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.265 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.265 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.265 [INFO][4529] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.266 [INFO][4529] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.270 [INFO][4529] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.273 [INFO][4529] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.275 [INFO][4529] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.277 [INFO][4529] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.277 [INFO][4529] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.278 [INFO][4529] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.301 [INFO][4529] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.386 [INFO][4529] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.386 [INFO][4529] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" host="localhost" Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.387 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:06:07.463725 containerd[1476]: 2025-02-13 20:06:07.387 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" HandleID="k8s-pod-network.d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.464471 containerd[1476]: 2025-02-13 20:06:07.390 [INFO][4501] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b6c4c9887-84rgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50e295ea44b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:07.464471 containerd[1476]: 2025-02-13 20:06:07.390 [INFO][4501] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.464471 containerd[1476]: 2025-02-13 20:06:07.390 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50e295ea44b ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.464471 containerd[1476]: 2025-02-13 20:06:07.394 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.464471 containerd[1476]: 2025-02-13 20:06:07.394 [INFO][4501] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" 
Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd", Pod:"calico-apiserver-b6c4c9887-84rgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50e295ea44b", MAC:"f6:24:05:4e:53:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:07.464471 containerd[1476]: 2025-02-13 20:06:07.460 [INFO][4501] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd" Namespace="calico-apiserver" Pod="calico-apiserver-b6c4c9887-84rgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:07.490068 kubelet[2525]: E0213 20:06:07.490038 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:07.569751 containerd[1476]: time="2025-02-13T20:06:07.569688175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:07.570893 containerd[1476]: time="2025-02-13T20:06:07.570854224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:06:07.572374 containerd[1476]: time="2025-02-13T20:06:07.572338189Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:07.576142 containerd[1476]: time="2025-02-13T20:06:07.575906817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.235553891s" Feb 13 20:06:07.576142 containerd[1476]: time="2025-02-13T20:06:07.575946564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:06:07.576142 containerd[1476]: time="2025-02-13T20:06:07.576063535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:07.577645 systemd-networkd[1407]: caliab4d358021f: Link UP Feb 13 20:06:07.578877 systemd-networkd[1407]: caliab4d358021f: Gained carrier Feb 13 20:06:07.579916 containerd[1476]: time="2025-02-13T20:06:07.579852074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:06:07.581087 containerd[1476]: time="2025-02-13T20:06:07.580880503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:06:07.581087 containerd[1476]: time="2025-02-13T20:06:07.581009526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:06:07.581087 containerd[1476]: time="2025-02-13T20:06:07.581034827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:07.581826 containerd[1476]: time="2025-02-13T20:06:07.581249729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:07.581826 containerd[1476]: time="2025-02-13T20:06:07.581568655Z" level=info msg="CreateContainer within sandbox \"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.229 [INFO][4514] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--74cbt-eth0 coredns-6f6b679f8f- kube-system 8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6 872 0 2025-02-13 20:05:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-74cbt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab4d358021f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.229 [INFO][4514] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.259 [INFO][4535] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" HandleID="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.267 [INFO][4535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" 
HandleID="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000474bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-74cbt", "timestamp":"2025-02-13 20:06:07.259150908 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.267 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.387 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.387 [INFO][4535] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.389 [INFO][4535] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.403 [INFO][4535] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.464 [INFO][4535] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.466 [INFO][4535] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.468 [INFO][4535] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.468 [INFO][4535] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.470 [INFO][4535] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4 Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.563 [INFO][4535] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.570 [INFO][4535] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.570 [INFO][4535] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" host="localhost" Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.570 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:06:07.601520 containerd[1476]: 2025-02-13 20:06:07.570 [INFO][4535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" HandleID="k8s-pod-network.dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.602090 containerd[1476]: 2025-02-13 20:06:07.574 [INFO][4514] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--74cbt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-74cbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab4d358021f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:07.602090 containerd[1476]: 2025-02-13 20:06:07.574 [INFO][4514] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.602090 containerd[1476]: 2025-02-13 20:06:07.574 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab4d358021f ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.602090 containerd[1476]: 2025-02-13 20:06:07.579 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.602090 containerd[1476]: 2025-02-13 20:06:07.580 
[INFO][4514] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--74cbt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4", Pod:"coredns-6f6b679f8f-74cbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab4d358021f", MAC:"52:84:46:cd:db:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:07.602090 containerd[1476]: 2025-02-13 20:06:07.591 [INFO][4514] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-74cbt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:07.608291 systemd[1]: Started sshd@11-10.0.0.159:22-10.0.0.1:59258.service - OpenSSH per-connection server daemon (10.0.0.1:59258). Feb 13 20:06:07.608720 containerd[1476]: time="2025-02-13T20:06:07.608540049Z" level=info msg="CreateContainer within sandbox \"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f485c5a567c87d9a9093ad95de972a55096a332245142c5f19c09b105a23c9aa\"" Feb 13 20:06:07.611283 containerd[1476]: time="2025-02-13T20:06:07.610896145Z" level=info msg="StartContainer for \"f485c5a567c87d9a9093ad95de972a55096a332245142c5f19c09b105a23c9aa\"" Feb 13 20:06:07.614216 systemd[1]: Started cri-containerd-d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd.scope - libcontainer container d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd. 
Feb 13 20:06:07.632239 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 20:06:07.632936 containerd[1476]: time="2025-02-13T20:06:07.632407648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:06:07.632936 containerd[1476]: time="2025-02-13T20:06:07.632544457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:06:07.632936 containerd[1476]: time="2025-02-13T20:06:07.632572001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:06:07.632936 containerd[1476]: time="2025-02-13T20:06:07.632687307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:06:07.650007 systemd[1]: Started cri-containerd-f485c5a567c87d9a9093ad95de972a55096a332245142c5f19c09b105a23c9aa.scope - libcontainer container f485c5a567c87d9a9093ad95de972a55096a332245142c5f19c09b105a23c9aa.
Feb 13 20:06:07.657906 sshd[4595]: Accepted publickey for core from 10.0.0.1 port 59258 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 20:06:07.659410 sshd[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:07.660964 systemd[1]: Started cri-containerd-dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4.scope - libcontainer container dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4.
Feb 13 20:06:07.667007 systemd-logind[1462]: New session 12 of user core.
Feb 13 20:06:07.667987 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:06:07.670971 systemd-networkd[1407]: calidedc00032b6: Gained IPv6LL
Feb 13 20:06:07.679338 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 20:06:07.680489 containerd[1476]: time="2025-02-13T20:06:07.680455961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6c4c9887-84rgs,Uid:3abe8309-3cc9-4bb3-b3b5-ac30285bef0e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd\""
Feb 13 20:06:07.698534 containerd[1476]: time="2025-02-13T20:06:07.698466238Z" level=info msg="StartContainer for \"f485c5a567c87d9a9093ad95de972a55096a332245142c5f19c09b105a23c9aa\" returns successfully"
Feb 13 20:06:07.706274 containerd[1476]: time="2025-02-13T20:06:07.706243070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-74cbt,Uid:8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6,Namespace:kube-system,Attempt:1,} returns sandbox id \"dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4\""
Feb 13 20:06:07.706993 kubelet[2525]: E0213 20:06:07.706947 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:07.709086 containerd[1476]: time="2025-02-13T20:06:07.709059981Z" level=info msg="CreateContainer within sandbox \"dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:06:07.723987 containerd[1476]: time="2025-02-13T20:06:07.723936665Z" level=info msg="CreateContainer within sandbox \"dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e986dc096e34a936f7fc761e155ad51cd5e61687aa3d001f0f763f8dc949b8a\""
Feb 13 20:06:07.724374 containerd[1476]: time="2025-02-13T20:06:07.724329978Z" level=info msg="StartContainer for \"8e986dc096e34a936f7fc761e155ad51cd5e61687aa3d001f0f763f8dc949b8a\""
Feb 13 20:06:07.756109 systemd[1]: Started cri-containerd-8e986dc096e34a936f7fc761e155ad51cd5e61687aa3d001f0f763f8dc949b8a.scope - libcontainer container 8e986dc096e34a936f7fc761e155ad51cd5e61687aa3d001f0f763f8dc949b8a.
Feb 13 20:06:07.789383 containerd[1476]: time="2025-02-13T20:06:07.789323376Z" level=info msg="StartContainer for \"8e986dc096e34a936f7fc761e155ad51cd5e61687aa3d001f0f763f8dc949b8a\" returns successfully"
Feb 13 20:06:07.812292 sshd[4595]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:07.822996 systemd[1]: sshd@11-10.0.0.159:22-10.0.0.1:59258.service: Deactivated successfully.
Feb 13 20:06:07.826008 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:06:07.828479 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:06:07.837117 systemd[1]: Started sshd@12-10.0.0.159:22-10.0.0.1:59260.service - OpenSSH per-connection server daemon (10.0.0.1:59260).
Feb 13 20:06:07.838277 systemd-logind[1462]: Removed session 12.
Feb 13 20:06:07.874284 sshd[4753]: Accepted publickey for core from 10.0.0.1 port 59260 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 20:06:07.875772 sshd[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:07.879816 systemd-logind[1462]: New session 13 of user core.
Feb 13 20:06:07.887911 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:06:08.027174 containerd[1476]: time="2025-02-13T20:06:08.027033476Z" level=info msg="StopPodSandbox for \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\"" Feb 13 20:06:08.151884 systemd[1]: Started sshd@13-10.0.0.159:22-10.0.0.1:59262.service - OpenSSH per-connection server daemon (10.0.0.1:59262). Feb 13 20:06:08.182287 sshd[4753]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:08.187571 systemd[1]: sshd@12-10.0.0.159:22-10.0.0.1:59260.service: Deactivated successfully. Feb 13 20:06:08.192958 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:06:08.193693 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:06:08.194799 systemd-logind[1462]: Removed session 13. Feb 13 20:06:08.289241 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 59262 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:08.291041 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:08.357822 systemd-logind[1462]: New session 14 of user core. Feb 13 20:06:08.366910 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.328 [INFO][4778] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.328 [INFO][4778] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" iface="eth0" netns="/var/run/netns/cni-7d06e125-0fbf-5c11-e25b-f5dfda0da6de" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.329 [INFO][4778] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" iface="eth0" netns="/var/run/netns/cni-7d06e125-0fbf-5c11-e25b-f5dfda0da6de" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.329 [INFO][4778] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" iface="eth0" netns="/var/run/netns/cni-7d06e125-0fbf-5c11-e25b-f5dfda0da6de" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.329 [INFO][4778] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.329 [INFO][4778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.365 [INFO][4791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.365 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.365 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.461 [WARNING][4791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.462 [INFO][4791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.464 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:08.471637 containerd[1476]: 2025-02-13 20:06:08.468 [INFO][4778] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:08.475318 containerd[1476]: time="2025-02-13T20:06:08.474882439Z" level=info msg="TearDown network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\" successfully" Feb 13 20:06:08.475318 containerd[1476]: time="2025-02-13T20:06:08.474911757Z" level=info msg="StopPodSandbox for \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\" returns successfully" Feb 13 20:06:08.475569 containerd[1476]: time="2025-02-13T20:06:08.475524509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-765dc7d966-wsz49,Uid:bdc93042-efe0-448d-ba9e-d249c9f9fc78,Namespace:calico-system,Attempt:1,}" Feb 13 20:06:08.476006 systemd[1]: run-netns-cni\x2d7d06e125\x2d0fbf\x2d5c11\x2de25b\x2df5dfda0da6de.mount: Deactivated successfully. Feb 13 20:06:08.498292 kubelet[2525]: E0213 20:06:08.497914 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:08.499719 kubelet[2525]: E0213 20:06:08.499694 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:08.543689 kubelet[2525]: I0213 20:06:08.542471 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-74cbt" podStartSLOduration=35.542457153 podStartE2EDuration="35.542457153s" podCreationTimestamp="2025-02-13 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:06:08.541715338 +0000 UTC m=+41.600989502" watchObservedRunningTime="2025-02-13 20:06:08.542457153 +0000 UTC m=+41.601731307" Feb 13 20:06:08.544381 sshd[4787]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:08.561869 systemd[1]: sshd@13-10.0.0.159:22-10.0.0.1:59262.service: Deactivated successfully. Feb 13 20:06:08.564467 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:06:08.567153 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:06:08.568639 systemd-logind[1462]: Removed session 14. 
Feb 13 20:06:08.757801 systemd-networkd[1407]: cali91bcea0119d: Link UP Feb 13 20:06:08.758399 systemd-networkd[1407]: cali91bcea0119d: Gained carrier Feb 13 20:06:08.763578 systemd-networkd[1407]: cali50e295ea44b: Gained IPv6LL Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.692 [INFO][4809] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0 calico-kube-controllers-765dc7d966- calico-system bdc93042-efe0-448d-ba9e-d249c9f9fc78 902 0 2025-02-13 20:05:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:765dc7d966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-765dc7d966-wsz49 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali91bcea0119d [] []}} ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.693 [INFO][4809] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.722 [INFO][4829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" HandleID="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.729 [INFO][4829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" HandleID="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005036e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-765dc7d966-wsz49", "timestamp":"2025-02-13 20:06:08.722716898 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.729 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.729 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.729 [INFO][4829] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.731 [INFO][4829] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.735 [INFO][4829] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.739 [INFO][4829] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.740 [INFO][4829] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.742 [INFO][4829] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.742 [INFO][4829] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.743 [INFO][4829] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33 Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.746 [INFO][4829] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.752 [INFO][4829] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.752 [INFO][4829] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" host="localhost" Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.752 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:06:08.771519 containerd[1476]: 2025-02-13 20:06:08.752 [INFO][4829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" HandleID="k8s-pod-network.feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.772266 containerd[1476]: 2025-02-13 20:06:08.755 [INFO][4809] cni-plugin/k8s.go 386: Populated endpoint ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0", GenerateName:"calico-kube-controllers-765dc7d966-", Namespace:"calico-system", SelfLink:"", UID:"bdc93042-efe0-448d-ba9e-d249c9f9fc78", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"765dc7d966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-765dc7d966-wsz49", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91bcea0119d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:08.772266 containerd[1476]: 2025-02-13 20:06:08.755 [INFO][4809] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.772266 containerd[1476]: 2025-02-13 20:06:08.755 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91bcea0119d ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.772266 containerd[1476]: 2025-02-13 20:06:08.758 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.772266 containerd[1476]: 2025-02-13 20:06:08.759 [INFO][4809] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0", GenerateName:"calico-kube-controllers-765dc7d966-", Namespace:"calico-system", SelfLink:"", UID:"bdc93042-efe0-448d-ba9e-d249c9f9fc78", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"765dc7d966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33", Pod:"calico-kube-controllers-765dc7d966-wsz49", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91bcea0119d", MAC:"3a:86:6a:d9:ad:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:08.772266 containerd[1476]: 2025-02-13 20:06:08.767 [INFO][4809] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33" Namespace="calico-system" Pod="calico-kube-controllers-765dc7d966-wsz49" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:08.790329 containerd[1476]: time="2025-02-13T20:06:08.790207647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:06:08.790329 containerd[1476]: time="2025-02-13T20:06:08.790271141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:06:08.790329 containerd[1476]: time="2025-02-13T20:06:08.790286422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:08.791108 containerd[1476]: time="2025-02-13T20:06:08.791058096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:06:08.819925 systemd[1]: Started cri-containerd-feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33.scope - libcontainer container feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33. 
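[Editor's note] The four "loading plugin" lines are the per-container runtime-v2 runc shim starting up, and the cri-containerd-….scope unit is the systemd cgroup the container runs in. For orientation, a hedged sketch of the same create-then-start sequence through containerd's 1.x Go client; kubelet actually drives this over the CRI, and the socket path and image ref here are assumptions for the sketch:

    package main

    import (
        "context"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "docker.io/library/alpine:latest",
            containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }

        // Create the container, then create and start its task; starting
        // the task is what spawns the runc shim whose plugin loading is
        // logged above.
        container, err := client.NewContainer(ctx, "demo",
            containerd.WithNewSnapshot("demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            panic(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            panic(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            panic(err)
        }
    }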
Feb 13 20:06:08.831626 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:06:08.854602 containerd[1476]: time="2025-02-13T20:06:08.854556342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-765dc7d966-wsz49,Uid:bdc93042-efe0-448d-ba9e-d249c9f9fc78,Namespace:calico-system,Attempt:1,} returns sandbox id \"feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33\"" Feb 13 20:06:09.398937 systemd-networkd[1407]: caliab4d358021f: Gained IPv6LL Feb 13 20:06:09.503174 kubelet[2525]: E0213 20:06:09.503135 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:09.976585 systemd-networkd[1407]: cali91bcea0119d: Gained IPv6LL Feb 13 20:06:10.504457 kubelet[2525]: E0213 20:06:10.504415 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:10.587584 containerd[1476]: time="2025-02-13T20:06:10.587540478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:10.588346 containerd[1476]: time="2025-02-13T20:06:10.588282973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:06:10.589458 containerd[1476]: time="2025-02-13T20:06:10.589430380Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:10.591652 containerd[1476]: time="2025-02-13T20:06:10.591613647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:10.592225 containerd[1476]: time="2025-02-13T20:06:10.592188754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.012266622s" Feb 13 20:06:10.592225 containerd[1476]: time="2025-02-13T20:06:10.592217861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:06:10.593190 containerd[1476]: time="2025-02-13T20:06:10.593160228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:06:10.594155 containerd[1476]: time="2025-02-13T20:06:10.594119106Z" level=info msg="CreateContainer within sandbox \"f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:06:10.607248 containerd[1476]: time="2025-02-13T20:06:10.607204109Z" level=info msg="CreateContainer within sandbox \"f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3adc169d263e4c5777491810e00ff0c03ceff26f117330940c1c2c4a3c1be1f8\"" Feb 13 
20:06:10.607649 containerd[1476]: time="2025-02-13T20:06:10.607614402Z" level=info msg="StartContainer for \"3adc169d263e4c5777491810e00ff0c03ceff26f117330940c1c2c4a3c1be1f8\"" Feb 13 20:06:10.638957 systemd[1]: Started cri-containerd-3adc169d263e4c5777491810e00ff0c03ceff26f117330940c1c2c4a3c1be1f8.scope - libcontainer container 3adc169d263e4c5777491810e00ff0c03ceff26f117330940c1c2c4a3c1be1f8. Feb 13 20:06:10.676324 containerd[1476]: time="2025-02-13T20:06:10.676284185Z" level=info msg="StartContainer for \"3adc169d263e4c5777491810e00ff0c03ceff26f117330940c1c2c4a3c1be1f8\" returns successfully" Feb 13 20:06:11.070517 containerd[1476]: time="2025-02-13T20:06:11.070447003Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:11.071207 containerd[1476]: time="2025-02-13T20:06:11.071160169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:06:11.073461 containerd[1476]: time="2025-02-13T20:06:11.073095728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 479.908789ms" Feb 13 20:06:11.073461 containerd[1476]: time="2025-02-13T20:06:11.073125156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:06:11.075966 containerd[1476]: time="2025-02-13T20:06:11.075313319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:06:11.077001 containerd[1476]: time="2025-02-13T20:06:11.076950855Z" level=info msg="CreateContainer within sandbox \"d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:06:11.094223 containerd[1476]: time="2025-02-13T20:06:11.094173199Z" level=info msg="CreateContainer within sandbox \"d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe0ec8723aecfc38da45c9f6882c1f2873e7cbcc6892c8b55fc4bdbede84d3d0\"" Feb 13 20:06:11.094828 containerd[1476]: time="2025-02-13T20:06:11.094740490Z" level=info msg="StartContainer for \"fe0ec8723aecfc38da45c9f6882c1f2873e7cbcc6892c8b55fc4bdbede84d3d0\"" Feb 13 20:06:11.121928 systemd[1]: Started cri-containerd-fe0ec8723aecfc38da45c9f6882c1f2873e7cbcc6892c8b55fc4bdbede84d3d0.scope - libcontainer container fe0ec8723aecfc38da45c9f6882c1f2873e7cbcc6892c8b55fc4bdbede84d3d0. 
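[Editor's note] The two kubelet "Nameserver limits exceeded" entries above stem from glibc's stub resolver honouring at most three nameserver lines (MAXNS = 3): kubelet applies the first three and warns that the rest were omitted, which is why the applied line shows exactly 1.1.1.1 1.0.0.1 8.8.8.8. A small Go sketch of the same check; the resolv.conf path is an assumption:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // glibc's stub resolver honours at most three "nameserver" entries
    // (MAXNS = 3); kubelet applies the first three and logs the
    // "Nameserver limits exceeded" warning seen above when more exist.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf") // path assumed for the sketch
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }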
Feb 13 20:06:11.162153 containerd[1476]: time="2025-02-13T20:06:11.162108328Z" level=info msg="StartContainer for \"fe0ec8723aecfc38da45c9f6882c1f2873e7cbcc6892c8b55fc4bdbede84d3d0\" returns successfully" Feb 13 20:06:11.518412 kubelet[2525]: I0213 20:06:11.518142 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b6c4c9887-nqgk9" podStartSLOduration=28.384653343 podStartE2EDuration="33.518125563s" podCreationTimestamp="2025-02-13 20:05:38 +0000 UTC" firstStartedPulling="2025-02-13 20:06:05.459514507 +0000 UTC m=+38.518788661" lastFinishedPulling="2025-02-13 20:06:10.592986727 +0000 UTC m=+43.652260881" observedRunningTime="2025-02-13 20:06:11.517855865 +0000 UTC m=+44.577130029" watchObservedRunningTime="2025-02-13 20:06:11.518125563 +0000 UTC m=+44.577399717" Feb 13 20:06:11.531218 kubelet[2525]: I0213 20:06:11.530731 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b6c4c9887-84rgs" podStartSLOduration=30.139064086 podStartE2EDuration="33.530713947s" podCreationTimestamp="2025-02-13 20:05:38 +0000 UTC" firstStartedPulling="2025-02-13 20:06:07.683381096 +0000 UTC m=+40.742655250" lastFinishedPulling="2025-02-13 20:06:11.075030957 +0000 UTC m=+44.134305111" observedRunningTime="2025-02-13 20:06:11.530613409 +0000 UTC m=+44.589887563" watchObservedRunningTime="2025-02-13 20:06:11.530713947 +0000 UTC m=+44.589988101" Feb 13 20:06:12.512558 kubelet[2525]: I0213 20:06:12.512521 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:06:12.512558 kubelet[2525]: I0213 20:06:12.512553 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:06:13.079166 containerd[1476]: time="2025-02-13T20:06:13.079120161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:13.079946 containerd[1476]: time="2025-02-13T20:06:13.079903602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:06:13.081092 containerd[1476]: time="2025-02-13T20:06:13.081063660Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:13.083521 containerd[1476]: time="2025-02-13T20:06:13.083484533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:13.084069 containerd[1476]: time="2025-02-13T20:06:13.084042043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.008694276s" Feb 13 20:06:13.084114 containerd[1476]: time="2025-02-13T20:06:13.084074165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:06:13.084963 containerd[1476]: 
time="2025-02-13T20:06:13.084923616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:06:13.085883 containerd[1476]: time="2025-02-13T20:06:13.085854596Z" level=info msg="CreateContainer within sandbox \"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:06:13.101229 containerd[1476]: time="2025-02-13T20:06:13.101195049Z" level=info msg="CreateContainer within sandbox \"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8b1c3e77975837b344f16731cc8ee6a7a3fcfbdd68167239408b66d1c651f25a\"" Feb 13 20:06:13.101661 containerd[1476]: time="2025-02-13T20:06:13.101552387Z" level=info msg="StartContainer for \"8b1c3e77975837b344f16731cc8ee6a7a3fcfbdd68167239408b66d1c651f25a\"" Feb 13 20:06:13.132001 systemd[1]: Started cri-containerd-8b1c3e77975837b344f16731cc8ee6a7a3fcfbdd68167239408b66d1c651f25a.scope - libcontainer container 8b1c3e77975837b344f16731cc8ee6a7a3fcfbdd68167239408b66d1c651f25a. Feb 13 20:06:13.159369 containerd[1476]: time="2025-02-13T20:06:13.159275289Z" level=info msg="StartContainer for \"8b1c3e77975837b344f16731cc8ee6a7a3fcfbdd68167239408b66d1c651f25a\" returns successfully" Feb 13 20:06:13.553852 systemd[1]: Started sshd@14-10.0.0.159:22-10.0.0.1:59272.service - OpenSSH per-connection server daemon (10.0.0.1:59272). Feb 13 20:06:13.599991 sshd[5031]: Accepted publickey for core from 10.0.0.1 port 59272 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:13.601649 sshd[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:13.605382 systemd-logind[1462]: New session 15 of user core. Feb 13 20:06:13.611946 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:06:13.729074 sshd[5031]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:13.733389 systemd[1]: sshd@14-10.0.0.159:22-10.0.0.1:59272.service: Deactivated successfully. Feb 13 20:06:13.735393 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:06:13.736989 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:06:13.737832 systemd-logind[1462]: Removed session 15. 
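[Editor's note] The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration is the end-to-end startup time minus the window spent pulling images. Reconstructed for calico-apiserver-b6c4c9887-nqgk9 from the logged m=+… monotonic offsets:

    package main

    import "fmt"

    // Reconstructing kubelet's numbers for calico-apiserver-b6c4c9887-nqgk9
    // from the log above, using the m=+... monotonic offsets in seconds.
    func main() {
        const (
            e2e                 = 33.518125563 // podStartE2EDuration
            firstStartedPulling = 38.518788661 // m=+38.518788661
            lastFinishedPulling = 43.652260881 // m=+43.652260881
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pull window: %.9fs\n", pull)     // ≈ 5.133472220s
        fmt.Printf("SLO duration:      %.9fs\n", e2e-pull) // ≈ 28.384653343s
        // The second figure matches the logged podStartSLOduration exactly.
    }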
Feb 13 20:06:14.176473 kubelet[2525]: I0213 20:06:14.176431 2525 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:06:14.176473 kubelet[2525]: I0213 20:06:14.176472 2525 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:06:15.625222 containerd[1476]: time="2025-02-13T20:06:15.625172179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:15.625890 containerd[1476]: time="2025-02-13T20:06:15.625825826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:06:15.626990 containerd[1476]: time="2025-02-13T20:06:15.626954680Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:15.628913 containerd[1476]: time="2025-02-13T20:06:15.628875151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:06:15.629465 containerd[1476]: time="2025-02-13T20:06:15.629427699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.544470087s" Feb 13 20:06:15.629503 containerd[1476]: time="2025-02-13T20:06:15.629468460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:06:15.639303 containerd[1476]: time="2025-02-13T20:06:15.639269580Z" level=info msg="CreateContainer within sandbox \"feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:06:15.657702 containerd[1476]: time="2025-02-13T20:06:15.657667257Z" level=info msg="CreateContainer within sandbox \"feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a21434d75204fb890f02968378df5f923266e3d5484efa2b4a133ba0b196e858\"" Feb 13 20:06:15.658161 containerd[1476]: time="2025-02-13T20:06:15.658117206Z" level=info msg="StartContainer for \"a21434d75204fb890f02968378df5f923266e3d5484efa2b4a133ba0b196e858\"" Feb 13 20:06:15.687920 systemd[1]: Started cri-containerd-a21434d75204fb890f02968378df5f923266e3d5484efa2b4a133ba0b196e858.scope - libcontainer container a21434d75204fb890f02968378df5f923266e3d5484efa2b4a133ba0b196e858. 
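[Editor's note] The kube-controllers pull above moved 34,141,192 bytes in 2.544470087s, roughly 13.4 MB/s over the wire; the logged "size" field (35,634,244) appears to be the stored image size rather than the transfer. By contrast, the earlier 479.908789ms apiserver "pull" read only 77 bytes, evidently because the content was already local and only the manifest was checked. The arithmetic:

    package main

    import "fmt"

    // Effective fetch rate for the calico/kube-controllers pull logged
    // above. "bytes read" is taken from the "stop pulling image" entry.
    func main() {
        const (
            bytesRead = 34141192.0
            seconds   = 2.544470087
        )
        fmt.Printf("≈ %.1f MB/s\n", bytesRead/seconds/1e6) // ≈ 13.4 MB/s
    }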
Feb 13 20:06:15.724652 containerd[1476]: time="2025-02-13T20:06:15.724615598Z" level=info msg="StartContainer for \"a21434d75204fb890f02968378df5f923266e3d5484efa2b4a133ba0b196e858\" returns successfully" Feb 13 20:06:16.533076 kubelet[2525]: I0213 20:06:16.532955 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rlkfd" podStartSLOduration=29.788038476 podStartE2EDuration="37.532938566s" podCreationTimestamp="2025-02-13 20:05:39 +0000 UTC" firstStartedPulling="2025-02-13 20:06:05.33985189 +0000 UTC m=+38.399126034" lastFinishedPulling="2025-02-13 20:06:13.08475197 +0000 UTC m=+46.144026124" observedRunningTime="2025-02-13 20:06:13.526344635 +0000 UTC m=+46.585618799" watchObservedRunningTime="2025-02-13 20:06:16.532938566 +0000 UTC m=+49.592212720" Feb 13 20:06:16.533076 kubelet[2525]: I0213 20:06:16.533072 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-765dc7d966-wsz49" podStartSLOduration=30.758575461 podStartE2EDuration="37.533067638s" podCreationTimestamp="2025-02-13 20:05:39 +0000 UTC" firstStartedPulling="2025-02-13 20:06:08.855696468 +0000 UTC m=+41.914970623" lastFinishedPulling="2025-02-13 20:06:15.630188656 +0000 UTC m=+48.689462800" observedRunningTime="2025-02-13 20:06:16.532303747 +0000 UTC m=+49.591577901" watchObservedRunningTime="2025-02-13 20:06:16.533067638 +0000 UTC m=+49.592341792" Feb 13 20:06:17.524541 kubelet[2525]: I0213 20:06:17.524507 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:06:17.674095 kubelet[2525]: I0213 20:06:17.674054 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:06:18.742728 systemd[1]: Started sshd@15-10.0.0.159:22-10.0.0.1:56282.service - OpenSSH per-connection server daemon (10.0.0.1:56282). Feb 13 20:06:18.786106 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 56282 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:18.787543 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:18.791641 systemd-logind[1462]: New session 16 of user core. Feb 13 20:06:18.800913 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:06:18.918665 sshd[5126]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:18.922976 systemd[1]: sshd@15-10.0.0.159:22-10.0.0.1:56282.service: Deactivated successfully. Feb 13 20:06:18.925076 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:06:18.925639 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:06:18.926477 systemd-logind[1462]: Removed session 16. Feb 13 20:06:19.598981 kubelet[2525]: I0213 20:06:19.598847 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:06:23.930856 systemd[1]: Started sshd@16-10.0.0.159:22-10.0.0.1:56290.service - OpenSSH per-connection server daemon (10.0.0.1:56290). Feb 13 20:06:23.971813 sshd[5187]: Accepted publickey for core from 10.0.0.1 port 56290 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:23.973328 sshd[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:23.977209 systemd-logind[1462]: New session 17 of user core. Feb 13 20:06:23.985911 systemd[1]: Started session-17.scope - Session 17 of User core. 
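[Editor's note] The sshd entries above show socket-activated SSH, one systemd unit instance per connection; the instance name visibly encodes a counter plus the local and peer endpoints. A sketch parsing that layout, which is read off the log itself rather than a documented stable interface, and which assumes IPv4 (no extra dashes inside the addresses):

    package main

    import (
        "fmt"
        "strings"
    )

    // Parse a per-connection sshd unit name as it appears in the log, e.g.
    // "sshd@16-10.0.0.159:22-10.0.0.1:56282.service".
    func main() {
        unit := "sshd@16-10.0.0.159:22-10.0.0.1:56282.service"
        inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(inst, "-", 3)
        if len(parts) != 3 {
            panic("unexpected instance name")
        }
        fmt.Printf("connection #%s: local %s <- peer %s\n",
            parts[0], parts[1], parts[2])
        // connection #16: local 10.0.0.159:22 <- peer 10.0.0.1:56282
    }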
Feb 13 20:06:24.095374 sshd[5187]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:24.100731 systemd[1]: sshd@16-10.0.0.159:22-10.0.0.1:56290.service: Deactivated successfully. Feb 13 20:06:24.102849 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:06:24.103495 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:06:24.104366 systemd-logind[1462]: Removed session 17. Feb 13 20:06:27.014730 containerd[1476]: time="2025-02-13T20:06:27.014680147Z" level=info msg="StopPodSandbox for \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\"" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.046 [WARNING][5216] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rlkfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede", Pod:"csi-node-driver-rlkfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali042991a0f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.046 [INFO][5216] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.046 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" iface="eth0" netns="" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.046 [INFO][5216] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.046 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.065 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.065 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.065 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.070 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.070 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.072 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.077076 containerd[1476]: 2025-02-13 20:06:27.074 [INFO][5216] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.077919 containerd[1476]: time="2025-02-13T20:06:27.077106180Z" level=info msg="TearDown network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\" successfully" Feb 13 20:06:27.077919 containerd[1476]: time="2025-02-13T20:06:27.077128361Z" level=info msg="StopPodSandbox for \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\" returns successfully" Feb 13 20:06:27.084108 containerd[1476]: time="2025-02-13T20:06:27.084064935Z" level=info msg="RemovePodSandbox for \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\"" Feb 13 20:06:27.086216 containerd[1476]: time="2025-02-13T20:06:27.086181358Z" level=info msg="Forcibly stopping sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\"" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.119 [WARNING][5250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rlkfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c798cf42-a2d5-48e9-9db3-eab6f1d0ef23", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0476a9a5d29bf47a7785504185f5f5d2bf40c67b60ef6b726170aa71bb71ede", Pod:"csi-node-driver-rlkfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali042991a0f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.119 [INFO][5250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.119 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" iface="eth0" netns="" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.119 [INFO][5250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.120 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.141 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.141 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.141 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.146 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.146 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" HandleID="k8s-pod-network.20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Workload="localhost-k8s-csi--node--driver--rlkfd-eth0" Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.147 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.153493 containerd[1476]: 2025-02-13 20:06:27.150 [INFO][5250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be" Feb 13 20:06:27.154097 containerd[1476]: time="2025-02-13T20:06:27.153525145Z" level=info msg="TearDown network for sandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\" successfully" Feb 13 20:06:27.159520 containerd[1476]: time="2025-02-13T20:06:27.159469560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:06:27.159520 containerd[1476]: time="2025-02-13T20:06:27.159516498Z" level=info msg="RemovePodSandbox \"20d13c6f4c3b7bab48f26b5b79bd39ae1cda67d50f8e7a94fb92add53c3515be\" returns successfully" Feb 13 20:06:27.160175 containerd[1476]: time="2025-02-13T20:06:27.160140650Z" level=info msg="StopPodSandbox for \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\"" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.191 [WARNING][5279] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6f20208-7647-41bb-a81f-be6437dee785", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae", Pod:"calico-apiserver-b6c4c9887-nqgk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27e0704bf72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.191 [INFO][5279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.191 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" iface="eth0" netns="" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.191 [INFO][5279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.191 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.209 [INFO][5286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.209 [INFO][5286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.209 [INFO][5286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.213 [WARNING][5286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.213 [INFO][5286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.214 [INFO][5286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.218751 containerd[1476]: 2025-02-13 20:06:27.216 [INFO][5279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.219305 containerd[1476]: time="2025-02-13T20:06:27.218808260Z" level=info msg="TearDown network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\" successfully" Feb 13 20:06:27.219305 containerd[1476]: time="2025-02-13T20:06:27.218833969Z" level=info msg="StopPodSandbox for \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\" returns successfully" Feb 13 20:06:27.219378 containerd[1476]: time="2025-02-13T20:06:27.219340713Z" level=info msg="RemovePodSandbox for \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\"" Feb 13 20:06:27.219406 containerd[1476]: time="2025-02-13T20:06:27.219378653Z" level=info msg="Forcibly stopping sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\"" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.249 [WARNING][5309] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6f20208-7647-41bb-a81f-be6437dee785", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8700647dfe8a59c5fcb7a10368790da4e6a33dd22298b34957abda2ebff11ae", Pod:"calico-apiserver-b6c4c9887-nqgk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27e0704bf72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.249 [INFO][5309] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.249 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" iface="eth0" netns="" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.249 [INFO][5309] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.249 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.268 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.269 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.269 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.273 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.273 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" HandleID="k8s-pod-network.21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Workload="localhost-k8s-calico--apiserver--b6c4c9887--nqgk9-eth0" Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.274 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.279459 containerd[1476]: 2025-02-13 20:06:27.277 [INFO][5309] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800" Feb 13 20:06:27.279459 containerd[1476]: time="2025-02-13T20:06:27.279420985Z" level=info msg="TearDown network for sandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\" successfully" Feb 13 20:06:27.283288 containerd[1476]: time="2025-02-13T20:06:27.283268854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:06:27.283341 containerd[1476]: time="2025-02-13T20:06:27.283307787Z" level=info msg="RemovePodSandbox \"21e21361291776fa63d42feca3ed3a2c6e4c14ab7486987e2abf034917ff1800\" returns successfully" Feb 13 20:06:27.283763 containerd[1476]: time="2025-02-13T20:06:27.283727239Z" level=info msg="StopPodSandbox for \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\"" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.314 [WARNING][5339] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--plm2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cd3ea36b-ba97-45d6-8dea-55fb8150ac30", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205", Pod:"coredns-6f6b679f8f-plm2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidedc00032b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.315 [INFO][5339] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.315 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" iface="eth0" netns="" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.315 [INFO][5339] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.315 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.334 [INFO][5346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.334 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.334 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
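[Editor's note] The coredns WorkloadEndpoint dump above prints ports as Go hex literals; decoded, they are the standard CoreDNS ports:

    package main

    import "fmt"

    // Port:0x35 and Port:0x23c1 from the WorkloadEndpoint dump above.
    func main() {
        fmt.Println(0x35)   // 53   - dns (UDP) and dns-tcp (TCP)
        fmt.Println(0x23c1) // 9153 - metrics (TCP, CoreDNS Prometheus endpoint)
    }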
Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.338 [WARNING][5346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.338 [INFO][5346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.339 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.344415 containerd[1476]: 2025-02-13 20:06:27.342 [INFO][5339] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.344841 containerd[1476]: time="2025-02-13T20:06:27.344450098Z" level=info msg="TearDown network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\" successfully" Feb 13 20:06:27.344841 containerd[1476]: time="2025-02-13T20:06:27.344475455Z" level=info msg="StopPodSandbox for \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\" returns successfully" Feb 13 20:06:27.344991 containerd[1476]: time="2025-02-13T20:06:27.344951573Z" level=info msg="RemovePodSandbox for \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\"" Feb 13 20:06:27.344991 containerd[1476]: time="2025-02-13T20:06:27.344990745Z" level=info msg="Forcibly stopping sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\"" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.392 [WARNING][5368] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--plm2t-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cd3ea36b-ba97-45d6-8dea-55fb8150ac30", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73723320dd2f7d6f89fc8e980a37ee9739aa148170ce31e153640fd5bf99c205", Pod:"coredns-6f6b679f8f-plm2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidedc00032b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.393 [INFO][5368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.393 [INFO][5368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" iface="eth0" netns="" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.393 [INFO][5368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.393 [INFO][5368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.413 [INFO][5376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.413 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.413 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
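[Editor's note] The teardown pass repeats one pattern per old sandbox: the WARNING fires because the live WorkloadEndpoint's ContainerID already points at the newer replacement sandbox, so the WEP is deliberately kept, and the IPAM release of an address that no longer exists is logged and ignored. Both follow the CNI rule that DEL must be idempotent: deleting something already gone succeeds. A generic sketch of that convention; this is not Calico's actual code, and the handle below is a placeholder:

    package main

    import (
        "errors"
        "fmt"
    )

    // ErrNotFound stands in for an IPAM datastore "no such allocation" error.
    var ErrNotFound = errors.New("allocation not found")

    // releaseByHandle sketches the idempotent-DEL convention the log shows:
    // releasing an address that no longer exists is warned about and then
    // treated as success, so repeated or late teardowns cannot fail the DEL.
    func releaseByHandle(release func(handle string) error, handle string) error {
        if err := release(handle); err != nil {
            if errors.Is(err, ErrNotFound) {
                fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handle)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        alreadyGone := func(string) error { return ErrNotFound }
        if err := releaseByHandle(alreadyGone, "k8s-pod-network.example"); err != nil {
            panic(err)
        }
        fmt.Println("teardown processing complete")
    }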
Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.417 [WARNING][5376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.417 [INFO][5376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" HandleID="k8s-pod-network.cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Workload="localhost-k8s-coredns--6f6b679f8f--plm2t-eth0" Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.419 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.423842 containerd[1476]: 2025-02-13 20:06:27.421 [INFO][5368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea" Feb 13 20:06:27.424251 containerd[1476]: time="2025-02-13T20:06:27.423875935Z" level=info msg="TearDown network for sandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\" successfully" Feb 13 20:06:27.427672 containerd[1476]: time="2025-02-13T20:06:27.427637214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:06:27.427724 containerd[1476]: time="2025-02-13T20:06:27.427681225Z" level=info msg="RemovePodSandbox \"cac01b53676d49728564b54c19169cf5154d50b090316b905a802496ba25f2ea\" returns successfully" Feb 13 20:06:27.428249 containerd[1476]: time="2025-02-13T20:06:27.428208408Z" level=info msg="StopPodSandbox for \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\"" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.461 [WARNING][5398] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd", Pod:"calico-apiserver-b6c4c9887-84rgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50e295ea44b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.461 [INFO][5398] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.461 [INFO][5398] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" iface="eth0" netns="" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.461 [INFO][5398] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.461 [INFO][5398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.481 [INFO][5405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.481 [INFO][5405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.481 [INFO][5405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.485 [WARNING][5405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.485 [INFO][5405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.486 [INFO][5405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.491512 containerd[1476]: 2025-02-13 20:06:27.489 [INFO][5398] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.491941 containerd[1476]: time="2025-02-13T20:06:27.491539116Z" level=info msg="TearDown network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\" successfully" Feb 13 20:06:27.491941 containerd[1476]: time="2025-02-13T20:06:27.491577217Z" level=info msg="StopPodSandbox for \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\" returns successfully" Feb 13 20:06:27.492061 containerd[1476]: time="2025-02-13T20:06:27.492033187Z" level=info msg="RemovePodSandbox for \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\"" Feb 13 20:06:27.492093 containerd[1476]: time="2025-02-13T20:06:27.492059306Z" level=info msg="Forcibly stopping sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\"" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.525 [WARNING][5428] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0", GenerateName:"calico-apiserver-b6c4c9887-", Namespace:"calico-apiserver", SelfLink:"", UID:"3abe8309-3cc9-4bb3-b3b5-ac30285bef0e", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6c4c9887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3b2628d6d8fe1799f7ade79bab5108d0633d0b5ac913c61f195f9f9f45fa9bd", Pod:"calico-apiserver-b6c4c9887-84rgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50e295ea44b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.526 [INFO][5428] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.526 [INFO][5428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" iface="eth0" netns="" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.526 [INFO][5428] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.526 [INFO][5428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.545 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.545 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.545 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.550 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.550 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" HandleID="k8s-pod-network.8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Workload="localhost-k8s-calico--apiserver--b6c4c9887--84rgs-eth0" Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.551 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.556801 containerd[1476]: 2025-02-13 20:06:27.554 [INFO][5428] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7" Feb 13 20:06:27.556801 containerd[1476]: time="2025-02-13T20:06:27.556739849Z" level=info msg="TearDown network for sandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\" successfully" Feb 13 20:06:27.560941 containerd[1476]: time="2025-02-13T20:06:27.560915159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:06:27.561009 containerd[1476]: time="2025-02-13T20:06:27.560963027Z" level=info msg="RemovePodSandbox \"8b2f5f55c6dfa6374f910965de0392f05886828b3a20f2673d0a66ac2341c6a7\" returns successfully" Feb 13 20:06:27.561338 containerd[1476]: time="2025-02-13T20:06:27.561297881Z" level=info msg="StopPodSandbox for \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\"" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.593 [WARNING][5457] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0", GenerateName:"calico-kube-controllers-765dc7d966-", Namespace:"calico-system", SelfLink:"", UID:"bdc93042-efe0-448d-ba9e-d249c9f9fc78", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"765dc7d966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33", Pod:"calico-kube-controllers-765dc7d966-wsz49", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91bcea0119d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.593 [INFO][5457] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.593 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" iface="eth0" netns="" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.593 [INFO][5457] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.593 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.613 [INFO][5464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.614 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.614 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.618 [WARNING][5464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.618 [INFO][5464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.619 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.624012 containerd[1476]: 2025-02-13 20:06:27.621 [INFO][5457] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.624467 containerd[1476]: time="2025-02-13T20:06:27.624049742Z" level=info msg="TearDown network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\" successfully" Feb 13 20:06:27.624467 containerd[1476]: time="2025-02-13T20:06:27.624074237Z" level=info msg="StopPodSandbox for \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\" returns successfully" Feb 13 20:06:27.624648 containerd[1476]: time="2025-02-13T20:06:27.624604195Z" level=info msg="RemovePodSandbox for \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\"" Feb 13 20:06:27.624648 containerd[1476]: time="2025-02-13T20:06:27.624647295Z" level=info msg="Forcibly stopping sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\"" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.657 [WARNING][5486] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0", GenerateName:"calico-kube-controllers-765dc7d966-", Namespace:"calico-system", SelfLink:"", UID:"bdc93042-efe0-448d-ba9e-d249c9f9fc78", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"765dc7d966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb11ba226d3337c91823b0ce773d62ed1c14ec098cc537280e38aa4e132ac33", Pod:"calico-kube-controllers-765dc7d966-wsz49", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91bcea0119d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.657 [INFO][5486] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.657 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" iface="eth0" netns="" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.657 [INFO][5486] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.657 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.678 [INFO][5494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.678 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.678 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.682 [WARNING][5494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.682 [INFO][5494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" HandleID="k8s-pod-network.d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Workload="localhost-k8s-calico--kube--controllers--765dc7d966--wsz49-eth0" Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.684 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.689026 containerd[1476]: 2025-02-13 20:06:27.686 [INFO][5486] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c" Feb 13 20:06:27.689433 containerd[1476]: time="2025-02-13T20:06:27.689067162Z" level=info msg="TearDown network for sandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\" successfully" Feb 13 20:06:27.692763 containerd[1476]: time="2025-02-13T20:06:27.692723475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:06:27.692822 containerd[1476]: time="2025-02-13T20:06:27.692770903Z" level=info msg="RemovePodSandbox \"d7efc7880b8322a912b102654437e93c388b1a534ebbe53d9226b41cb5c3684c\" returns successfully" Feb 13 20:06:27.693281 containerd[1476]: time="2025-02-13T20:06:27.693247572Z" level=info msg="StopPodSandbox for \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\"" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.726 [WARNING][5516] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--74cbt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4", Pod:"coredns-6f6b679f8f-74cbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab4d358021f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.726 [INFO][5516] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.726 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" iface="eth0" netns="" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.726 [INFO][5516] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.727 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.746 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.746 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.746 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.750 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.750 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.752 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.756309 containerd[1476]: 2025-02-13 20:06:27.754 [INFO][5516] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.756836 containerd[1476]: time="2025-02-13T20:06:27.756326541Z" level=info msg="TearDown network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\" successfully" Feb 13 20:06:27.756836 containerd[1476]: time="2025-02-13T20:06:27.756358621Z" level=info msg="StopPodSandbox for \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\" returns successfully" Feb 13 20:06:27.756923 containerd[1476]: time="2025-02-13T20:06:27.756884000Z" level=info msg="RemovePodSandbox for \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\"" Feb 13 20:06:27.756923 containerd[1476]: time="2025-02-13T20:06:27.756921640Z" level=info msg="Forcibly stopping sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\"" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.790 [WARNING][5545] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--74cbt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8fdc04b2-cc59-4f83-90f5-3dfdfbb973a6", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd12ebfba9a9105234338e9be3c9d68f150a044a33b282fd145530f2f0589eb4", Pod:"coredns-6f6b679f8f-74cbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab4d358021f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.791 [INFO][5545] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.791 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" iface="eth0" netns="" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.791 [INFO][5545] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.791 [INFO][5545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.810 [INFO][5553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.810 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.810 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.814 [WARNING][5553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.814 [INFO][5553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" HandleID="k8s-pod-network.1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Workload="localhost-k8s-coredns--6f6b679f8f--74cbt-eth0" Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.815 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:06:27.820693 containerd[1476]: 2025-02-13 20:06:27.818 [INFO][5545] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc" Feb 13 20:06:27.820693 containerd[1476]: time="2025-02-13T20:06:27.820656401Z" level=info msg="TearDown network for sandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\" successfully" Feb 13 20:06:27.824766 containerd[1476]: time="2025-02-13T20:06:27.824713631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:06:27.824903 containerd[1476]: time="2025-02-13T20:06:27.824797347Z" level=info msg="RemovePodSandbox \"1db9ec978572d8deac8d2315e93430fe02c551581a0a9f8d15a79e2dfca0cbbc\" returns successfully" Feb 13 20:06:29.115378 systemd[1]: Started sshd@17-10.0.0.159:22-10.0.0.1:32894.service - OpenSSH per-connection server daemon (10.0.0.1:32894). Feb 13 20:06:29.158558 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 32894 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:29.160231 sshd[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:29.164281 systemd-logind[1462]: New session 18 of user core. Feb 13 20:06:29.174970 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:06:29.292391 sshd[5564]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:29.296595 systemd[1]: sshd@17-10.0.0.159:22-10.0.0.1:32894.service: Deactivated successfully. Feb 13 20:06:29.298480 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:06:29.299146 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:06:29.300100 systemd-logind[1462]: Removed session 18. Feb 13 20:06:33.500351 kubelet[2525]: I0213 20:06:33.500227 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:06:34.303769 systemd[1]: Started sshd@18-10.0.0.159:22-10.0.0.1:32898.service - OpenSSH per-connection server daemon (10.0.0.1:32898). Feb 13 20:06:34.341398 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 32898 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:34.342940 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:34.346449 systemd-logind[1462]: New session 19 of user core. Feb 13 20:06:34.361908 systemd[1]: Started session-19.scope - Session 19 of User core. 
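[Editor's note] The sshd blocks from here on all share one lifecycle: publickey accept, pam_unix session open, a per-session systemd scope (session-N.scope), pam_unix session close, scope deactivation. A small sketch that pairs the open/close lines by sshd PID and prints each session's duration; since the journal's short timestamps omit the year, only durations are meaningful, not absolute dates:

```go
// sessions.go - a sketch pairing the pam_unix "session opened"/"session
// closed" lines above by sshd PID and printing session durations.
// Reads journal text on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var entry = regexp.MustCompile(
	`(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // flattened lines are long
	for sc.Scan() {
		for _, m := range entry.FindAllStringSubmatch(sc.Text(), -1) {
			// time.Parse accepts the fractional seconds even though the
			// layout omits them; the year defaults to 0, so use durations only.
			t, err := time.Parse("Jan 02 15:04:05", m[1])
			if err != nil {
				continue
			}
			pid := m[2]
			if m[3] == "opened" {
				opened[pid] = t
			} else if start, ok := opened[pid]; ok {
				fmt.Printf("sshd[%s]: session lasted %s\n", pid, t.Sub(start))
				delete(opened, pid)
			}
		}
	}
}
```

On the entries above it would report, for example, that sshd[5564]'s session 18 lasted roughly 130 ms of logged wall time between open and close.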
Feb 13 20:06:34.465394 sshd[5584]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:34.476847 systemd[1]: sshd@18-10.0.0.159:22-10.0.0.1:32898.service: Deactivated successfully. Feb 13 20:06:34.478894 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:06:34.480557 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:06:34.485014 systemd[1]: Started sshd@19-10.0.0.159:22-10.0.0.1:32908.service - OpenSSH per-connection server daemon (10.0.0.1:32908). Feb 13 20:06:34.485849 systemd-logind[1462]: Removed session 19. Feb 13 20:06:34.520077 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 32908 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:34.521676 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:34.525513 systemd-logind[1462]: New session 20 of user core. Feb 13 20:06:34.534915 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:06:34.716737 sshd[5598]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:34.725912 systemd[1]: sshd@19-10.0.0.159:22-10.0.0.1:32908.service: Deactivated successfully. Feb 13 20:06:34.727833 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:06:34.729527 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:06:34.734225 systemd[1]: Started sshd@20-10.0.0.159:22-10.0.0.1:32916.service - OpenSSH per-connection server daemon (10.0.0.1:32916). Feb 13 20:06:34.735055 systemd-logind[1462]: Removed session 20. Feb 13 20:06:34.769985 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 32916 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:34.771486 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:34.775369 systemd-logind[1462]: New session 21 of user core. Feb 13 20:06:34.784894 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:06:35.027543 kubelet[2525]: E0213 20:06:35.027433 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:36.347779 sshd[5610]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:36.365427 systemd[1]: Started sshd@21-10.0.0.159:22-10.0.0.1:32928.service - OpenSSH per-connection server daemon (10.0.0.1:32928). Feb 13 20:06:36.366158 systemd[1]: sshd@20-10.0.0.159:22-10.0.0.1:32916.service: Deactivated successfully. Feb 13 20:06:36.368375 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:06:36.369136 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:06:36.370427 systemd-logind[1462]: Removed session 21. Feb 13 20:06:36.404010 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 32928 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:36.405771 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:36.409847 systemd-logind[1462]: New session 22 of user core. Feb 13 20:06:36.419912 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:06:36.628104 sshd[5626]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:36.635752 systemd[1]: sshd@21-10.0.0.159:22-10.0.0.1:32928.service: Deactivated successfully. Feb 13 20:06:36.637716 systemd[1]: session-22.scope: Deactivated successfully. 
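[Editor's note] The kubelet "Nameserver limits exceeded" events interleaved here come from a hard glibc resolver limit: resolv.conf honors at most three nameserver lines (MAXNS = 3), so kubelet applies the first three from the node's resolv.conf (1.1.1.1, 1.0.0.1, 8.8.8.8 above) and warns that the rest were omitted. A sketch of the same check:

```go
// dnscheck.go - a small sketch of the check behind kubelet's "Nameserver
// limits exceeded" events above: glibc uses at most three nameserver
// lines, so anything past the third is silently unused.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applied nameserver line would be: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Println("nameservers:", strings.Join(servers, " "))
	}
}
```

The warning is informational rather than fatal, which matches the cluster continuing to run normally through these events.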
Feb 13 20:06:36.639595 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:06:36.649017 systemd[1]: Started sshd@22-10.0.0.159:22-10.0.0.1:37682.service - OpenSSH per-connection server daemon (10.0.0.1:37682). Feb 13 20:06:36.649812 systemd-logind[1462]: Removed session 22. Feb 13 20:06:36.683267 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 37682 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:36.684952 sshd[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:36.688745 systemd-logind[1462]: New session 23 of user core. Feb 13 20:06:36.696994 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:06:36.826989 sshd[5641]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:36.831286 systemd[1]: sshd@22-10.0.0.159:22-10.0.0.1:37682.service: Deactivated successfully. Feb 13 20:06:36.833422 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:06:36.834054 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:06:36.834847 systemd-logind[1462]: Removed session 23. Feb 13 20:06:41.842919 systemd[1]: Started sshd@23-10.0.0.159:22-10.0.0.1:37692.service - OpenSSH per-connection server daemon (10.0.0.1:37692). Feb 13 20:06:41.883077 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 37692 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:41.884822 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:41.888817 systemd-logind[1462]: New session 24 of user core. Feb 13 20:06:41.897937 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:06:42.001388 sshd[5659]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:42.005718 systemd[1]: sshd@23-10.0.0.159:22-10.0.0.1:37692.service: Deactivated successfully. Feb 13 20:06:42.007896 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:06:42.008567 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:06:42.009378 systemd-logind[1462]: Removed session 24. Feb 13 20:06:47.014143 systemd[1]: Started sshd@24-10.0.0.159:22-10.0.0.1:60820.service - OpenSSH per-connection server daemon (10.0.0.1:60820). Feb 13 20:06:47.015281 kubelet[2525]: E0213 20:06:47.015247 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:47.027688 kubelet[2525]: E0213 20:06:47.027117 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:47.093587 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 60820 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:47.095350 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:47.099223 systemd-logind[1462]: New session 25 of user core. Feb 13 20:06:47.112914 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:06:47.227894 sshd[5702]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:47.232062 systemd[1]: sshd@24-10.0.0.159:22-10.0.0.1:60820.service: Deactivated successfully. Feb 13 20:06:47.234153 systemd[1]: session-25.scope: Deactivated successfully. 
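[Editor's note] The "RSA SHA256:w6wKJ467..." strings in the Accepted-publickey lines are OpenSSH's SHA-256 fingerprint format: the unpadded base64 of SHA-256 over the wire-format public key blob. The same value can be recomputed from an authorized_keys entry; a sketch using golang.org/x/crypto/ssh:

```go
// fp.go - a sketch computing the same "SHA256:..." fingerprint format that
// sshd logs in the Accepted-publickey lines above, from an authorized_keys
// style public key supplied on stdin.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		log.Fatal(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}
	// FingerprintSHA256 yields e.g. "SHA256:w6wKJ467a9+...": the unpadded
	// base64 of the SHA-256 digest of the key's wire encoding.
	fmt.Println(pub.Type(), ssh.FingerprintSHA256(pub))
}
```

Running `go run fp.go < ~/.ssh/id_rsa.pub` should print the same fingerprint sshd logs when that key authenticates, which is handy for matching the repeated logins above to a specific client key.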
Feb 13 20:06:47.234752 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:06:47.235641 systemd-logind[1462]: Removed session 25. Feb 13 20:06:49.027446 kubelet[2525]: E0213 20:06:49.027404 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:52.027544 kubelet[2525]: E0213 20:06:52.027497 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:52.239007 systemd[1]: Started sshd@25-10.0.0.159:22-10.0.0.1:60828.service - OpenSSH per-connection server daemon (10.0.0.1:60828). Feb 13 20:06:52.279016 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 60828 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:52.280541 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:52.284548 systemd-logind[1462]: New session 26 of user core. Feb 13 20:06:52.289952 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:06:52.408996 sshd[5736]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:52.413532 systemd[1]: sshd@25-10.0.0.159:22-10.0.0.1:60828.service: Deactivated successfully. Feb 13 20:06:52.415626 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:06:52.416326 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:06:52.417191 systemd-logind[1462]: Removed session 26. Feb 13 20:06:57.422879 systemd[1]: Started sshd@26-10.0.0.159:22-10.0.0.1:46210.service - OpenSSH per-connection server daemon (10.0.0.1:46210). Feb 13 20:06:57.465365 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 46210 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 20:06:57.467106 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:57.471053 systemd-logind[1462]: New session 27 of user core. Feb 13 20:06:57.478904 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:06:57.589975 sshd[5751]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:57.594083 systemd[1]: sshd@26-10.0.0.159:22-10.0.0.1:46210.service: Deactivated successfully. Feb 13 20:06:57.596109 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:06:57.596711 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:06:57.597556 systemd-logind[1462]: Removed session 27.