Feb 13 19:53:03.875916 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 19:53:03.875936 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 19:53:03.875947 kernel: BIOS-provided physical RAM map:
Feb 13 19:53:03.875953 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:53:03.875959 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:53:03.875965 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:53:03.875972 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:53:03.875978 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:53:03.875984 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 13 19:53:03.875990 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 13 19:53:03.875999 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 13 19:53:03.876005 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Feb 13 19:53:03.876011 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Feb 13 19:53:03.876018 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Feb 13 19:53:03.876025 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 13 19:53:03.876032 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:53:03.876041 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 13 19:53:03.876047 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 13 19:53:03.876054 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:53:03.876060 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:53:03.876067 kernel: NX (Execute Disable) protection: active
Feb 13 19:53:03.876073 kernel: APIC: Static calls initialized
Feb 13 19:53:03.876080 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:53:03.876086 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Feb 13 19:53:03.876093 kernel: SMBIOS 2.8 present.
Feb 13 19:53:03.876100 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Feb 13 19:53:03.876106 kernel: Hypervisor detected: KVM
Feb 13 19:53:03.876115 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:53:03.876122 kernel: kvm-clock: using sched offset of 3916532309 cycles
Feb 13 19:53:03.876129 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:53:03.876136 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:53:03.876143 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:53:03.876150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:53:03.876157 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 13 19:53:03.876163 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:53:03.876177 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:53:03.876186 kernel: Using GB pages for direct mapping
Feb 13 19:53:03.876193 kernel: Secure boot disabled
Feb 13 19:53:03.876199 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:53:03.876206 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 19:53:03.876216 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:53:03.876223 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:53:03.876231 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:53:03.876251 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 19:53:03.876258 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:53:03.876265 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:53:03.876272 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:53:03.876279 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:53:03.876286 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:53:03.876293 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 19:53:03.876303 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 19:53:03.876310 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 19:53:03.876317 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 19:53:03.876324 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 19:53:03.876331 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 19:53:03.876338 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 19:53:03.876345 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 19:53:03.876352 kernel: No NUMA configuration found
Feb 13 19:53:03.876359 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 13 19:53:03.876368 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 13 19:53:03.876375 kernel: Zone ranges:
Feb 13 19:53:03.876382 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:53:03.876389 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 13 19:53:03.876396 kernel: Normal empty
Feb 13 19:53:03.876403 kernel: Movable zone start for each node
Feb 13 19:53:03.876410 kernel: Early memory node ranges
Feb 13 19:53:03.876417 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:53:03.876424 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 19:53:03.876431 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 19:53:03.876440 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 13 19:53:03.876447 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 13 19:53:03.876454 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 13 19:53:03.876461 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 13 19:53:03.876468 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:53:03.876475 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:53:03.876482 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 19:53:03.876489 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:53:03.876496 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 13 19:53:03.876505 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 13 19:53:03.876512 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 13 19:53:03.876519 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:53:03.876526 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:53:03.876533 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:53:03.876540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:53:03.876547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:53:03.876554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:53:03.876561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:53:03.876568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:53:03.876578 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:53:03.876585 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:53:03.876592 kernel: TSC deadline timer available
Feb 13 19:53:03.876599 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:53:03.876606 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:53:03.876613 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:53:03.876620 kernel: kvm-guest: setup PV sched yield
Feb 13 19:53:03.876627 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 19:53:03.876633 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:53:03.876643 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:53:03.876650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:53:03.876657 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:53:03.876664 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:53:03.876671 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:53:03.876678 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:53:03.876686 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:53:03.876706 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 19:53:03.876718 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:53:03.876725 kernel: random: crng init done
Feb 13 19:53:03.876739 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:53:03.876754 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:53:03.876774 kernel: Fallback order for Node 0: 0
Feb 13 19:53:03.876782 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 13 19:53:03.876797 kernel: Policy zone: DMA32
Feb 13 19:53:03.876811 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:53:03.876826 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 171124K reserved, 0K cma-reserved)
Feb 13 19:53:03.876844 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:53:03.876864 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 19:53:03.876871 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:53:03.876892 kernel: Dynamic Preempt: voluntary
Feb 13 19:53:03.876907 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:53:03.876917 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:53:03.876924 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:53:03.876932 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:53:03.876939 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:53:03.876947 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:53:03.876954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:53:03.876961 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:53:03.876971 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:53:03.876978 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:53:03.876986 kernel: Console: colour dummy device 80x25
Feb 13 19:53:03.876993 kernel: printk: console [ttyS0] enabled
Feb 13 19:53:03.877000 kernel: ACPI: Core revision 20230628
Feb 13 19:53:03.877010 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:53:03.877017 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:53:03.877024 kernel: x2apic enabled
Feb 13 19:53:03.877032 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:53:03.877039 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:53:03.877047 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:53:03.877054 kernel: kvm-guest: setup PV IPIs
Feb 13 19:53:03.877061 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:53:03.877068 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:53:03.877078 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:53:03.877086 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:53:03.877093 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:53:03.877100 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:53:03.877108 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:53:03.877115 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:53:03.877123 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:53:03.877130 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:53:03.877137 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:53:03.877147 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:53:03.877154 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:53:03.877162 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:53:03.877175 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:53:03.877183 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:53:03.877190 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:53:03.877198 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:53:03.877205 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:53:03.877214 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:53:03.877222 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:53:03.877229 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:53:03.877237 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:53:03.877352 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:53:03.877360 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:53:03.877367 kernel: landlock: Up and running.
Feb 13 19:53:03.877374 kernel: SELinux: Initializing.
Feb 13 19:53:03.877382 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:53:03.877392 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:53:03.877399 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:53:03.877407 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:53:03.877414 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:53:03.877422 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:53:03.877430 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:53:03.877437 kernel: ... version: 0
Feb 13 19:53:03.877444 kernel: ... bit width: 48
Feb 13 19:53:03.877452 kernel: ... generic registers: 6
Feb 13 19:53:03.877461 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:53:03.877468 kernel: ... max period: 00007fffffffffff
Feb 13 19:53:03.877476 kernel: ... fixed-purpose events: 0
Feb 13 19:53:03.877483 kernel: ... event mask: 000000000000003f
Feb 13 19:53:03.877490 kernel: signal: max sigframe size: 1776
Feb 13 19:53:03.877498 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:53:03.877505 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:53:03.877513 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:53:03.877520 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:53:03.877530 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:53:03.877537 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:53:03.877544 kernel: smpboot: Max logical packages: 1
Feb 13 19:53:03.877552 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:53:03.877559 kernel: devtmpfs: initialized
Feb 13 19:53:03.877567 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:53:03.877574 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 19:53:03.877581 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 19:53:03.877589 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 13 19:53:03.877599 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 19:53:03.877606 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 19:53:03.877614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:53:03.877621 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:53:03.877629 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:53:03.877636 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:53:03.877643 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:53:03.877651 kernel: audit: type=2000 audit(1739476383.576:1): state=initialized audit_enabled=0 res=1
Feb 13 19:53:03.877658 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:53:03.877667 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:53:03.877675 kernel: cpuidle: using governor menu
Feb 13 19:53:03.877682 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:53:03.877689 kernel: dca service started, version 1.12.1
Feb 13 19:53:03.877697 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:53:03.877704 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:53:03.877712 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:53:03.877719 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:53:03.877727 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:53:03.877736 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:53:03.877744 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:53:03.877751 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:53:03.877758 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:53:03.877766 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:53:03.877773 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:53:03.877780 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:53:03.877788 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:53:03.877795 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:53:03.877804 kernel: ACPI: Interpreter enabled
Feb 13 19:53:03.877812 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:53:03.877819 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:53:03.877827 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:53:03.877834 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:53:03.877841 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:53:03.877849 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:53:03.878068 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:53:03.878260 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:53:03.878414 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:53:03.878432 kernel: PCI host bridge to bus 0000:00
Feb 13 19:53:03.878559 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:53:03.878671 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:53:03.878780 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:53:03.878888 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:53:03.879001 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:53:03.879110 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Feb 13 19:53:03.879229 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:53:03.879382 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:53:03.879538 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:53:03.879670 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 19:53:03.879800 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 19:53:03.879917 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 19:53:03.880035 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 19:53:03.880152 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:53:03.880308 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:53:03.880428 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 19:53:03.880547 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 19:53:03.880708 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 13 19:53:03.880845 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:53:03.880992 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 19:53:03.881112 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 19:53:03.881264 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 13 19:53:03.881397 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:53:03.881518 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 19:53:03.881641 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 19:53:03.881788 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 13 19:53:03.881943 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 19:53:03.882074 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:53:03.882204 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:53:03.882349 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:53:03.882472 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 19:53:03.882598 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 19:53:03.882839 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:53:03.882961 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 19:53:03.882971 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:53:03.882979 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:53:03.882986 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:53:03.882994 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:53:03.883005 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:53:03.883013 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:53:03.883020 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:53:03.883027 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:53:03.883035 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:53:03.883042 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:53:03.883049 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:53:03.883057 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:53:03.883064 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:53:03.883074 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:53:03.883081 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:53:03.883088 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:53:03.883096 kernel: iommu: Default domain type: Translated
Feb 13 19:53:03.883103 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:53:03.883110 kernel: efivars: Registered efivars operations
Feb 13 19:53:03.883118 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:53:03.883125 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:53:03.883132 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 19:53:03.883142 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 13 19:53:03.883149 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 13 19:53:03.883156 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 13 19:53:03.883302 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:53:03.883422 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:53:03.883540 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:53:03.883550 kernel: vgaarb: loaded
Feb 13 19:53:03.883558 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:53:03.883565 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:53:03.883576 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:53:03.883584 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:53:03.883591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:53:03.883599 kernel: pnp: PnP ACPI init
Feb 13 19:53:03.883732 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:53:03.883743 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:53:03.883750 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:53:03.883758 kernel: NET: Registered PF_INET protocol family
Feb 13 19:53:03.883768 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:53:03.883776 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:53:03.883783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:53:03.883791 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:53:03.883798 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:53:03.883805 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:53:03.883813 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:53:03.883820 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:53:03.883828 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:53:03.883837 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:53:03.883958 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 19:53:03.884078 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 19:53:03.884198 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:53:03.884341 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:53:03.884453 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:53:03.884563 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:53:03.884672 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:53:03.884788 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Feb 13 19:53:03.884798 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:53:03.884806 kernel: Initialise system trusted keyrings
Feb 13 19:53:03.884813 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:53:03.884821 kernel: Key type asymmetric registered
Feb 13 19:53:03.884828 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:53:03.884835 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:53:03.884843 kernel: io scheduler mq-deadline registered
Feb 13 19:53:03.884850 kernel: io scheduler kyber registered
Feb 13 19:53:03.884861 kernel: io scheduler bfq registered
Feb 13 19:53:03.884868 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:53:03.884876 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:53:03.884884 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:53:03.884891 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:53:03.884898 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:53:03.884906 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:53:03.884914 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:53:03.884921 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:53:03.884931 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:53:03.885065 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:53:03.885076 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:53:03.885195 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:53:03.885428 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:53:03 UTC (1739476383)
Feb 13 19:53:03.885541 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:53:03.885551 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:53:03.885562 kernel: efifb: probing for efifb
Feb 13 19:53:03.885570 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Feb 13 19:53:03.885578 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Feb 13 19:53:03.885585 kernel: efifb: scrolling: redraw
Feb 13 19:53:03.885592 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Feb 13 19:53:03.885600 kernel: Console: switching to colour frame buffer device 100x37
Feb 13 19:53:03.885624 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:53:03.885634 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:53:03.885641 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:53:03.885651 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:53:03.885659 kernel: Segment Routing with IPv6
Feb 13 19:53:03.885667 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:53:03.885674 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:53:03.885682 kernel: Key type dns_resolver registered
Feb 13 19:53:03.885690 kernel: IPI shorthand broadcast: enabled
Feb 13 19:53:03.885697 kernel: sched_clock: Marking stable (546002823, 112191862)->(707163714, -48969029)
Feb 13 19:53:03.885705 kernel: registered taskstats version 1
Feb 13 19:53:03.885713 kernel: Loading compiled-in X.509 certificates
Feb 13 19:53:03.885721 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 19:53:03.885731 kernel: Key type .fscrypt registered
Feb 13 19:53:03.885739 kernel: Key type fscrypt-provisioning registered
Feb 13 19:53:03.885746 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:53:03.885754 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:53:03.885762 kernel: ima: No architecture policies found
Feb 13 19:53:03.885769 kernel: clk: Disabling unused clocks
Feb 13 19:53:03.885777 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 19:53:03.885785 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:53:03.885795 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 19:53:03.885802 kernel: Run /init as init process
Feb 13 19:53:03.885810 kernel: with arguments:
Feb 13 19:53:03.885818 kernel: /init
Feb 13 19:53:03.885825 kernel: with environment:
Feb 13 19:53:03.885833 kernel: HOME=/
Feb 13 19:53:03.885840 kernel: TERM=linux
Feb 13 19:53:03.885848 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:53:03.885858 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:53:03.885870 systemd[1]: Detected virtualization kvm.
Feb 13 19:53:03.885878 systemd[1]: Detected architecture x86-64.
Feb 13 19:53:03.885886 systemd[1]: Running in initrd.
Feb 13 19:53:03.885896 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:53:03.885906 systemd[1]: Hostname set to <localhost>.
Feb 13 19:53:03.885915 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:53:03.885923 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:53:03.885931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:53:03.885942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:53:03.885950 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:53:03.885959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:53:03.885967 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:53:03.885978 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:53:03.885988 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:53:03.885996 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:53:03.886005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:53:03.886013 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:53:03.886021 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:53:03.886030 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:53:03.886040 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:53:03.886048 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:53:03.886056 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:53:03.886065 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:53:03.886073 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:53:03.886081 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:53:03.886089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:53:03.886098 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:53:03.886106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:53:03.886116 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:53:03.886124 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:53:03.886132 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:53:03.886141 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:53:03.886149 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:53:03.886157 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:53:03.886165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:53:03.886181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:53:03.886192 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:53:03.886200 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:53:03.886209 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:53:03.886217 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:53:03.886254 systemd-journald[191]: Collecting audit messages is disabled.
Feb 13 19:53:03.886275 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:53:03.886284 systemd-journald[191]: Journal started
Feb 13 19:53:03.886304 systemd-journald[191]: Runtime Journal (/run/log/journal/20ef854a33404f119dd3dd9f27d8e52a) is 6.0M, max 48.3M, 42.2M free.
Feb 13 19:53:03.878477 systemd-modules-load[194]: Inserted module 'overlay'
Feb 13 19:53:03.894334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:53:03.897720 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:53:03.897697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:53:03.901020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:53:03.907264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:53:03.909073 systemd-modules-load[194]: Inserted module 'br_netfilter'
Feb 13 19:53:03.910011 kernel: Bridge firewalling registered
Feb 13 19:53:03.910395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:53:03.913444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:53:03.915934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:53:03.919390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:53:03.927121 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:53:03.928999 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:53:03.930281 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:53:03.939317 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:53:03.943394 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:53:03.945561 dracut-cmdline[225]: dracut-dracut-053
Feb 13 19:53:03.948560 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 19:53:03.978801 systemd-resolved[232]: Positive Trust Anchors:
Feb 13 19:53:03.978816 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:53:03.978847 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:53:03.981305 systemd-resolved[232]: Defaulting to hostname 'linux'.
Feb 13 19:53:03.982329 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:53:03.988336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:53:04.032273 kernel: SCSI subsystem initialized
Feb 13 19:53:04.041267 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:53:04.051274 kernel: iscsi: registered transport (tcp)
Feb 13 19:53:04.072452 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:53:04.072491 kernel: QLogic iSCSI HBA Driver
Feb 13 19:53:04.122984 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:53:04.134365 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:53:04.157604 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:53:04.157629 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:53:04.158619 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:53:04.199270 kernel: raid6: avx2x4 gen() 30562 MB/s
Feb 13 19:53:04.216263 kernel: raid6: avx2x2 gen() 31000 MB/s
Feb 13 19:53:04.233331 kernel: raid6: avx2x1 gen() 25879 MB/s
Feb 13 19:53:04.233349 kernel: raid6: using algorithm avx2x2 gen() 31000 MB/s
Feb 13 19:53:04.251335 kernel: raid6: .... xor() 19995 MB/s, rmw enabled
Feb 13 19:53:04.251358 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:53:04.271264 kernel: xor: automatically using best checksumming function avx
Feb 13 19:53:04.424267 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:53:04.437346 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:53:04.449372 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:53:04.462033 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Feb 13 19:53:04.466292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:53:04.476411 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:53:04.490145 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Feb 13 19:53:04.522620 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:53:04.536372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:53:04.599477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:53:04.612328 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:53:04.621679 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:53:04.624528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:53:04.627054 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:53:04.627662 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:53:04.637426 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:53:04.664602 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:53:04.664618 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:53:04.664803 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:53:04.664815 kernel: GPT:9289727 != 19775487
Feb 13 19:53:04.664825 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:53:04.664842 kernel: GPT:9289727 != 19775487
Feb 13 19:53:04.664851 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:53:04.664861 kernel: libata version 3.00 loaded.
Feb 13 19:53:04.664872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:53:04.664882 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:53:04.638561 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:53:04.670135 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:53:04.670159 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:53:04.708139 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:53:04.708170 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:53:04.708337 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:53:04.708476 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (470)
Feb 13 19:53:04.708488 kernel: scsi host0: ahci
Feb 13 19:53:04.708640 kernel: scsi host1: ahci
Feb 13 19:53:04.708784 kernel: scsi host2: ahci
Feb 13 19:53:04.708928 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
Feb 13 19:53:04.708939 kernel: scsi host3: ahci
Feb 13 19:53:04.709080 kernel: scsi host4: ahci
Feb 13 19:53:04.709236 kernel: scsi host5: ahci
Feb 13 19:53:04.709405 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Feb 13 19:53:04.709416 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Feb 13 19:53:04.709426 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Feb 13 19:53:04.709440 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Feb 13 19:53:04.709451 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Feb 13 19:53:04.709460 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Feb 13 19:53:04.646903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:53:04.662722 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:53:04.662926 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:53:04.665212 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:53:04.667280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:53:04.667452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:53:04.669957 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:53:04.682221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:53:04.698186 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:53:04.704441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:53:04.714293 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:53:04.718427 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:53:04.719052 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:53:04.729987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:53:04.738380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:53:04.740347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:53:04.748271 disk-uuid[555]: Primary Header is updated.
Feb 13 19:53:04.748271 disk-uuid[555]: Secondary Entries is updated.
Feb 13 19:53:04.748271 disk-uuid[555]: Secondary Header is updated.
Feb 13 19:53:04.752263 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:53:04.756273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:53:04.757759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:53:05.014270 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:53:05.014328 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:53:05.014339 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:53:05.022275 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:53:05.022335 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:53:05.023272 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:53:05.023283 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:53:05.024692 kernel: ata3.00: applying bridge limits
Feb 13 19:53:05.024704 kernel: ata3.00: configured for UDMA/100
Feb 13 19:53:05.025277 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:53:05.071273 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:53:05.084865 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:53:05.084889 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:53:05.757267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:53:05.757319 disk-uuid[559]: The operation has completed successfully.
Feb 13 19:53:05.787639 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:53:05.787772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:53:05.811378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:53:05.816168 sh[592]: Success
Feb 13 19:53:05.829303 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:53:05.859572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:53:05.872588 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:53:05.875506 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:53:05.887180 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 19:53:05.887214 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:53:05.887225 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:53:05.887235 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:53:05.887903 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:53:05.891902 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:53:05.892726 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:53:05.893482 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:53:05.895023 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:53:05.907552 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:53:05.907575 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:53:05.907586 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:53:05.910267 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:53:05.918661 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:53:05.920354 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:53:05.928706 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:53:05.935398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:53:05.984868 ignition[692]: Ignition 2.19.0
Feb 13 19:53:05.984880 ignition[692]: Stage: fetch-offline
Feb 13 19:53:05.984915 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:53:05.984926 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:53:05.985016 ignition[692]: parsed url from cmdline: ""
Feb 13 19:53:05.985020 ignition[692]: no config URL provided
Feb 13 19:53:05.985026 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:53:05.985035 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:53:05.985061 ignition[692]: op(1): [started] loading QEMU firmware config module
Feb 13 19:53:05.985066 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:53:05.991472 ignition[692]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:53:06.000858 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:53:06.012367 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:53:06.033554 systemd-networkd[780]: lo: Link UP
Feb 13 19:53:06.033563 systemd-networkd[780]: lo: Gained carrier
Feb 13 19:53:06.035098 systemd-networkd[780]: Enumeration completed
Feb 13 19:53:06.035406 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:53:06.035668 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:53:06.035671 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:53:06.036645 systemd-networkd[780]: eth0: Link UP
Feb 13 19:53:06.042286 ignition[692]: parsing config with SHA512: 81e5a7212943b42f02bc0bdfa6fb5449702ca6d3ff7b5838b720382bf0dcb0dc290d91d712107bace776f848735886a95b565a8884a6e16a62faa429de45f3fa
Feb 13 19:53:06.036649 systemd-networkd[780]: eth0: Gained carrier
Feb 13 19:53:06.036656 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:53:06.037331 systemd[1]: Reached target network.target - Network.
Feb 13 19:53:06.046023 ignition[692]: fetch-offline: fetch-offline passed
Feb 13 19:53:06.045617 unknown[692]: fetched base config from "system"
Feb 13 19:53:06.046081 ignition[692]: Ignition finished successfully
Feb 13 19:53:06.045625 unknown[692]: fetched user config from "qemu"
Feb 13 19:53:06.048950 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:53:06.051035 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:53:06.060292 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:53:06.060382 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:53:06.072580 ignition[783]: Ignition 2.19.0
Feb 13 19:53:06.072590 ignition[783]: Stage: kargs
Feb 13 19:53:06.072754 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:53:06.072765 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:53:06.073545 ignition[783]: kargs: kargs passed
Feb 13 19:53:06.073585 ignition[783]: Ignition finished successfully
Feb 13 19:53:06.080085 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:53:06.087362 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:53:06.098957 ignition[794]: Ignition 2.19.0
Feb 13 19:53:06.098967 ignition[794]: Stage: disks
Feb 13 19:53:06.099130 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:53:06.102428 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:53:06.099141 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:53:06.103751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:53:06.099882 ignition[794]: disks: disks passed
Feb 13 19:53:06.105656 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:53:06.099925 ignition[794]: Ignition finished successfully
Feb 13 19:53:06.107864 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:53:06.108473 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:53:06.108762 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:53:06.120357 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:53:06.131554 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:53:06.137578 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:53:06.150308 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:53:06.231271 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 19:53:06.231823 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:53:06.232954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:53:06.246338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:53:06.247942 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:53:06.249146 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:53:06.254404 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
Feb 13 19:53:06.249182 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:53:06.249202 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:53:06.263347 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:53:06.263367 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:53:06.263378 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:53:06.263393 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:53:06.255704 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:53:06.258001 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:53:06.265037 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:53:06.293165 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:53:06.297751 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:53:06.301549 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:53:06.305294 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:53:06.387522 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:53:06.398403 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:53:06.399371 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:53:06.409267 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:53:06.421429 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:53:06.430223 ignition[929]: INFO : Ignition 2.19.0
Feb 13 19:53:06.430223 ignition[929]: INFO : Stage: mount
Feb 13 19:53:06.432111 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:53:06.432111 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:53:06.432111 ignition[929]: INFO : mount: mount passed
Feb 13 19:53:06.432111 ignition[929]: INFO : Ignition finished successfully
Feb 13 19:53:06.438392 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:53:06.455320 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:53:06.885434 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:53:06.901364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:53:06.908233 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Feb 13 19:53:06.908271 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:53:06.908282 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:53:06.909698 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:53:06.912262 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:53:06.913227 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:53:06.941071 ignition[960]: INFO : Ignition 2.19.0 Feb 13 19:53:06.941071 ignition[960]: INFO : Stage: files Feb 13 19:53:06.942760 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:53:06.942760 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:53:06.942760 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:53:06.942760 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:53:06.942760 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:53:06.949140 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:53:06.949140 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:53:06.949140 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:53:06.949140 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:53:06.949140 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 19:53:06.945029 unknown[960]: wrote ssh authorized keys file for user: core Feb 13 19:53:06.988646 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:53:07.096491 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:53:07.098457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 19:53:07.605128 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:53:07.839381 systemd-networkd[780]: eth0: Gained IPv6LL Feb 13 19:53:07.882539 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:53:07.882539 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:53:07.886290 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:53:07.888429 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:53:07.888429 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:53:07.888429 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:53:07.892686 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:53:07.894577 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:53:07.894577 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:53:07.897663 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:53:07.917358 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:53:07.923819 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:53:07.925435 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:53:07.925435 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:53:07.928174 ignition[960]: INFO : files: op(11): [finished] setting 
preset to enabled for "prepare-helm.service" Feb 13 19:53:07.929613 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:53:07.931373 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:53:07.933024 ignition[960]: INFO : files: files passed Feb 13 19:53:07.933800 ignition[960]: INFO : Ignition finished successfully Feb 13 19:53:07.936827 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:53:07.944448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:53:07.945456 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:53:07.952640 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:53:07.952764 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:53:07.957853 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:53:07.961445 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:53:07.963103 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:53:07.964650 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:53:07.968705 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:53:07.969343 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:53:07.981435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:53:08.004585 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:53:08.004716 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:53:08.007021 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:53:08.009030 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:53:08.011040 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:53:08.024365 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:53:08.038843 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:53:08.045468 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:53:08.054049 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:53:08.055325 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:53:08.057507 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:53:08.059489 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:53:08.059602 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:53:08.061733 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:53:08.063440 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:53:08.065446 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:53:08.067460 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
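[All of the files-stage operations logged above — the helm tarball fetched from get.helm.sh, the manifests under /home/core, the kubernetes sysext link, and the unit presets — are declared in the user config. A hedged Butane-style sketch of a config fragment that would produce those operations (illustrative; the real config is only identified by its hash earlier in the log):]

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false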
Feb 13 19:53:08.069436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:53:08.071585 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:53:08.073670 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:53:08.075915 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:53:08.077972 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:53:08.080082 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:53:08.081808 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:53:08.081929 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:53:08.084022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:53:08.085608 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:53:08.087657 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:53:08.087780 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:53:08.089839 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:53:08.089947 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:53:08.092125 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:53:08.092233 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:53:08.094214 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:53:08.095909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:53:08.100309 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:53:08.102614 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:53:08.104266 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:53:08.106189 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:53:08.106295 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:53:08.108578 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:53:08.108674 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:53:08.110420 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:53:08.110532 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:53:08.112468 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:53:08.112570 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:53:08.125375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:53:08.127624 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:53:08.129398 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:53:08.130523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:53:08.132929 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:53:08.134022 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 19:53:08.138583 ignition[1016]: INFO : Ignition 2.19.0 Feb 13 19:53:08.138583 ignition[1016]: INFO : Stage: umount Feb 13 19:53:08.140345 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:53:08.140345 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:53:08.140345 ignition[1016]: INFO : umount: umount passed Feb 13 19:53:08.140345 ignition[1016]: INFO : Ignition finished successfully Feb 13 19:53:08.142695 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:53:08.142827 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:53:08.144981 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:53:08.145101 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:53:08.147260 systemd[1]: Stopped target network.target - Network. Feb 13 19:53:08.148360 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:53:08.148425 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:53:08.150689 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:53:08.150740 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:53:08.152519 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:53:08.152567 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:53:08.154627 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:53:08.154676 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:53:08.156730 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:53:08.158700 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:53:08.161884 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:53:08.167716 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:53:08.167852 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:53:08.169291 systemd-networkd[780]: eth0: DHCPv6 lease lost Feb 13 19:53:08.171130 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:53:08.171206 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:53:08.173306 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:53:08.173436 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:53:08.175954 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:53:08.176027 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:53:08.185315 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:53:08.187228 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:53:08.187297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:53:08.188629 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:53:08.188676 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:53:08.190672 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:53:08.190718 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:53:08.193072 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:53:08.202114 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:53:08.202260 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:53:08.211015 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:53:08.211216 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:53:08.213391 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:53:08.213442 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:53:08.215454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:53:08.215493 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:53:08.217415 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:53:08.217464 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:53:08.219687 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:53:08.219734 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:53:08.221647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:53:08.221695 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:53:08.232443 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:53:08.234641 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:53:08.234711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:53:08.236937 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:53:08.236986 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:53:08.239384 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:53:08.239433 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:53:08.241667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:53:08.241715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:53:08.244616 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:53:08.244732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:53:08.328452 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:53:08.328599 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:53:08.329281 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:53:08.331593 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:53:08.331644 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:53:08.349371 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:53:08.357652 systemd[1]: Switching root. Feb 13 19:53:08.392690 systemd-journald[191]: Journal stopped Feb 13 19:53:09.547974 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
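[The "Switching root" / "Journal stopped" pair above marks the hand-off from the initramfs to the real root filesystem: PID 1 pivots onto /sysroot and the initramfs journal daemon receives SIGTERM. As a sketch, initrd-switch-root.service performs the equivalent of the following (exact invocation may differ by systemd version):]

    systemctl --no-block switch-root /sysroot   # PID 1 re-executes onto the prepared root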
Feb 13 19:53:09.548056 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:53:09.548070 kernel: SELinux: policy capability open_perms=1 Feb 13 19:53:09.548081 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:53:09.548096 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:53:09.548112 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:53:09.548123 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:53:09.548135 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:53:09.548147 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:53:09.548158 kernel: audit: type=1403 audit(1739476388.823:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:53:09.548176 systemd[1]: Successfully loaded SELinux policy in 38.282ms. Feb 13 19:53:09.548202 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.020ms. Feb 13 19:53:09.548215 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:53:09.548230 systemd[1]: Detected virtualization kvm. Feb 13 19:53:09.548253 systemd[1]: Detected architecture x86-64. Feb 13 19:53:09.548265 systemd[1]: Detected first boot. Feb 13 19:53:09.548276 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:53:09.548289 zram_generator::config[1060]: No configuration found. Feb 13 19:53:09.548301 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:53:09.548313 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:53:09.548325 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:53:09.548339 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:53:09.548358 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:53:09.548370 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:53:09.548382 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:53:09.548393 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:53:09.548405 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:53:09.548417 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:53:09.548429 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:53:09.548442 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:53:09.548456 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:53:09.548468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:53:09.548480 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:53:09.548492 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:53:09.548504 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
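[The virtualization and machine-ID detections above can be reproduced on the running guest; two illustrative commands, assuming the DMI product UUID is what systemd seeded the machine ID from on this KVM guest, as the log states:]

    systemd-detect-virt                   # prints "kvm" for this guest
    cat /sys/class/dmi/id/product_uuid    # the VM UUID used to initialize /etc/machine-id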
Feb 13 19:53:09.548516 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:53:09.548528 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:53:09.548539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:53:09.548551 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:53:09.548565 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:53:09.548578 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:53:09.548589 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:53:09.548601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:53:09.548613 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:53:09.548625 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:53:09.548637 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:53:09.548648 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:53:09.548663 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:53:09.548675 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:53:09.548687 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:53:09.548700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:53:09.548712 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:53:09.548723 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:53:09.548735 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:53:09.548747 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:53:09.548759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:09.548773 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:53:09.548785 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:53:09.548796 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:53:09.548809 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:53:09.548820 systemd[1]: Reached target machines.target - Containers. Feb 13 19:53:09.548832 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:53:09.548844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:53:09.548856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:53:09.548870 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:53:09.548882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:53:09.548894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:53:09.548906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:53:09.548918 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Feb 13 19:53:09.548929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:53:09.548942 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:53:09.548955 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:53:09.548969 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:53:09.548981 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:53:09.548994 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:53:09.549005 kernel: loop: module loaded Feb 13 19:53:09.549025 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:53:09.549037 kernel: fuse: init (API version 7.39) Feb 13 19:53:09.549049 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:53:09.549061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:53:09.549079 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:53:09.549100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:53:09.549112 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:53:09.549123 systemd[1]: Stopped verity-setup.service. Feb 13 19:53:09.549136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:09.549148 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:53:09.549160 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:53:09.549189 systemd-journald[1130]: Collecting audit messages is disabled. Feb 13 19:53:09.549214 kernel: ACPI: bus type drm_connector registered Feb 13 19:53:09.549226 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:53:09.549250 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:53:09.549263 systemd-journald[1130]: Journal started Feb 13 19:53:09.549289 systemd-journald[1130]: Runtime Journal (/run/log/journal/20ef854a33404f119dd3dd9f27d8e52a) is 6.0M, max 48.3M, 42.2M free. Feb 13 19:53:09.329199 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:53:09.345758 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:53:09.346217 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:53:09.552304 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:53:09.553315 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:53:09.554544 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:53:09.555776 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:53:09.557227 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:53:09.558773 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:53:09.558948 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:53:09.560484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:53:09.560652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:53:09.562084 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 19:53:09.562283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:53:09.563627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:53:09.563793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:53:09.565642 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:53:09.565895 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:53:09.567439 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:53:09.567642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:53:09.569105 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:53:09.570589 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:53:09.572232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:53:09.591493 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:53:09.599321 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:53:09.601570 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:53:09.602707 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:53:09.602740 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:53:09.604883 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:53:09.607385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:53:09.611490 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:53:09.612775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:53:09.614336 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:53:09.618584 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:53:09.620328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:53:09.622438 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:53:09.623645 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:53:09.630389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:53:09.641315 systemd-journald[1130]: Time spent on flushing to /var/log/journal/20ef854a33404f119dd3dd9f27d8e52a is 23.528ms for 990 entries. Feb 13 19:53:09.641315 systemd-journald[1130]: System Journal (/var/log/journal/20ef854a33404f119dd3dd9f27d8e52a) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:53:09.693854 systemd-journald[1130]: Received client request to flush runtime journal. Feb 13 19:53:09.693909 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 19:53:09.693940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:53:09.635337 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
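[The journald lines above report the runtime journal (6.0M under /run/log/journal, keyed by the new machine ID) and the flush to the persistent system journal under /var/log/journal. Two standard ways to inspect what is being reported, for reference:]

    journalctl --disk-usage                       # total space used by journal files
    journalctl -b -u systemd-networkd.service     # one unit's messages for the current boot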
Feb 13 19:53:09.638744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:53:09.642829 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:53:09.644372 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:53:09.648284 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:53:09.649861 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:53:09.657617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:53:09.666402 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:53:09.668749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:53:09.675054 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:53:09.681344 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:53:09.692904 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:53:09.695552 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Feb 13 19:53:09.695566 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Feb 13 19:53:09.697827 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:53:09.701425 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:53:09.710407 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:53:09.712814 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:53:09.713736 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:53:09.722296 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 19:53:09.738761 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:53:09.746413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:53:09.750351 kernel: loop2: detected capacity change from 0 to 140768 Feb 13 19:53:09.764684 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Feb 13 19:53:09.765069 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Feb 13 19:53:09.771142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:53:09.798275 kernel: loop3: detected capacity change from 0 to 142488 Feb 13 19:53:09.811281 kernel: loop4: detected capacity change from 0 to 218376 Feb 13 19:53:09.817267 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 19:53:09.827269 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:53:09.828112 (sd-merge)[1201]: Merged extensions into '/usr'. Feb 13 19:53:09.832866 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:53:09.832885 systemd[1]: Reloading... Feb 13 19:53:09.901287 zram_generator::config[1230]: No configuration found. Feb 13 19:53:09.962037 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
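["(sd-merge)" above is systemd-sysext overlaying the three extension images — containerd-flatcar, docker-flatcar, kubernetes — onto /usr; the kubernetes image is the .raw file Ignition linked into /etc/extensions during the files stage. Illustrative commands for the same mechanism:]

    systemd-sysext status     # which hierarchies are extended, and by which images
    systemd-sysext refresh    # unmerge and re-merge after images are added or removed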
Feb 13 19:53:10.015400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:53:10.064042 systemd[1]: Reloading finished in 230 ms. Feb 13 19:53:10.100494 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:53:10.102016 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:53:10.116458 systemd[1]: Starting ensure-sysext.service... Feb 13 19:53:10.118581 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:53:10.126591 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:53:10.126600 systemd[1]: Reloading... Feb 13 19:53:10.144097 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:53:10.144489 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:53:10.145457 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:53:10.145754 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Feb 13 19:53:10.145837 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Feb 13 19:53:10.149264 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:53:10.149393 systemd-tmpfiles[1265]: Skipping /boot Feb 13 19:53:10.162421 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:53:10.162438 systemd-tmpfiles[1265]: Skipping /boot Feb 13 19:53:10.183304 zram_generator::config[1300]: No configuration found. Feb 13 19:53:10.271315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:53:10.320336 systemd[1]: Reloading finished in 193 ms. Feb 13 19:53:10.338853 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:53:10.351886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:53:10.360802 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:53:10.363330 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:53:10.365719 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:53:10.370254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:53:10.373847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:53:10.376793 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:53:10.380116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:10.380295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:53:10.382490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:53:10.388487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
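[The "Duplicate line for path ..." warnings above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition it parses and ignores the rest. A hypothetical pair of fragments that would trigger exactly this warning (the second file name is invented for illustration):]

    # /usr/lib/tmpfiles.d/provision.conf
    d /root 0700 root root -
    # /usr/lib/tmpfiles.d/example.conf   (hypothetical second fragment)
    d /root 0750 root root -    # -> "Duplicate line for path /root", ignored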
Feb 13 19:53:10.394330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:53:10.395616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:53:10.402307 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:53:10.403568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:10.404758 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:53:10.406592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:53:10.406761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:53:10.408382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:53:10.408558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:53:10.410334 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:53:10.410501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:53:10.416989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:53:10.417228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:53:10.418503 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Feb 13 19:53:10.421945 augenrules[1359]: No rules Feb 13 19:53:10.423721 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:53:10.425984 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:53:10.429310 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:53:10.436427 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:10.436677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:53:10.444644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:53:10.448542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:53:10.451928 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:53:10.453195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:53:10.454342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:10.455044 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:53:10.456558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:53:10.459959 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:53:10.461565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:53:10.461791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:53:10.463514 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 13 19:53:10.464002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:53:10.465779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:53:10.466501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:53:10.468710 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:53:10.483522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:10.483667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:53:10.492410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:53:10.498101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:53:10.500548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:53:10.507400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:53:10.509539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:53:10.513396 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:53:10.514706 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:53:10.514730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:53:10.516448 systemd[1]: Finished ensure-sysext.service. Feb 13 19:53:10.517748 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:53:10.517917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:53:10.520478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Feb 13 19:53:10.521405 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:53:10.521600 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:53:10.524605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:53:10.524799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:53:10.527059 systemd-resolved[1334]: Positive Trust Anchors: Feb 13 19:53:10.527076 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:53:10.527108 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:53:10.531534 systemd-resolved[1334]: Defaulting to hostname 'linux'. Feb 13 19:53:10.533950 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
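[The "Positive Trust Anchors" entry above is systemd-resolved's built-in DNSSEC root trust anchor (the 2017 root KSK), and "Defaulting to hostname 'linux'" appears because no hostname had been configured yet. The resolver state it computes can be inspected with, for example:]

    resolvectl status              # per-link DNS servers and DNSSEC settings
    resolvectl query example.com   # a lookup routed through systemd-resolved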
Feb 13 19:53:10.541215 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:53:10.555665 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:53:10.555878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:53:10.561772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:53:10.563381 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:53:10.563508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:53:10.576527 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:53:10.580586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:53:10.587854 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:53:10.600264 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:53:10.602043 systemd-networkd[1403]: lo: Link UP Feb 13 19:53:10.602051 systemd-networkd[1403]: lo: Gained carrier Feb 13 19:53:10.605475 systemd-networkd[1403]: Enumeration completed Feb 13 19:53:10.605591 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:53:10.606315 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:53:10.606327 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:53:10.607095 systemd[1]: Reached target network.target - Network. Feb 13 19:53:10.608326 systemd-networkd[1403]: eth0: Link UP Feb 13 19:53:10.608334 systemd-networkd[1403]: eth0: Gained carrier Feb 13 19:53:10.608359 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:53:10.611398 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:53:10.623377 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:53:10.634861 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 19:53:10.636437 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:53:10.638057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:53:10.647664 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 19:53:10.647941 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:53:10.648105 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:53:10.651574 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:53:10.662527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:53:10.669620 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:53:10.671421 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:53:11.324887 systemd-resolved[1334]: Clock change detected. Flushing caches. Feb 13 19:53:11.325020 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
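[Here, as in the initramfs earlier, eth0 is configured by the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about a "potentially unpredictable interface name". The rough shape of such a catch-all unit is sketched below; this is illustrative, not a verbatim copy of Flatcar's shipped file:]

    [Match]
    Name=*

    [Network]
    DHCP=yes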
Feb 13 19:53:11.325083 systemd-timesyncd[1416]: Initial clock synchronization to Thu 2025-02-13 19:53:11.324838 UTC. Feb 13 19:53:11.328789 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:53:11.360551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:53:11.361209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:53:11.375965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:53:11.386814 kernel: kvm_amd: TSC scaling supported Feb 13 19:53:11.386889 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:53:11.386902 kernel: kvm_amd: Nested Paging enabled Feb 13 19:53:11.386917 kernel: kvm_amd: LBR virtualization supported Feb 13 19:53:11.387889 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:53:11.387937 kernel: kvm_amd: Virtual GIF supported Feb 13 19:53:11.408801 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:53:11.436828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:53:11.455048 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:53:11.466922 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:53:11.476456 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:53:11.504806 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:53:11.506256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:53:11.507375 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:53:11.508550 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:53:11.509829 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:53:11.511268 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:53:11.512466 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:53:11.513842 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:53:11.515076 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:53:11.515102 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:53:11.516015 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:53:11.517580 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:53:11.520168 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:53:11.531292 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:53:11.533575 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:53:11.535116 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:53:11.536289 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:53:11.537251 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:53:11.538224 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:53:11.538253 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 19:53:11.539299 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:53:11.541333 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:53:11.543459 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:53:11.545874 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:53:11.550128 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:53:11.554072 jq[1446]: false Feb 13 19:53:11.554402 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:53:11.558041 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:53:11.561094 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:53:11.565925 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:53:11.569928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:53:11.571882 extend-filesystems[1447]: Found loop3 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found loop4 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found loop5 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found sr0 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda1 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda2 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda3 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found usr Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda4 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda6 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda7 Feb 13 19:53:11.571882 extend-filesystems[1447]: Found vda9 Feb 13 19:53:11.571882 extend-filesystems[1447]: Checking size of /dev/vda9 Feb 13 19:53:11.575457 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:53:11.588731 dbus-daemon[1445]: [system] SELinux support is enabled Feb 13 19:53:11.577507 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:53:11.578012 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:53:11.599966 jq[1462]: true Feb 13 19:53:11.580499 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:53:11.583281 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:53:11.587071 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:53:11.589192 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:53:11.594838 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:53:11.595931 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:53:11.596271 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:53:11.596573 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:53:11.602265 extend-filesystems[1447]: Resized partition /dev/vda9 Feb 13 19:53:11.606475 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 19:53:11.608380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372) Feb 13 19:53:11.606783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:53:11.610736 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:53:11.630727 update_engine[1461]: I20250213 19:53:11.630645 1461 main.cc:92] Flatcar Update Engine starting Feb 13 19:53:11.631152 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:53:11.632788 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:53:11.635983 update_engine[1461]: I20250213 19:53:11.635929 1461 update_check_scheduler.cc:74] Next update check in 5m36s Feb 13 19:53:11.640386 tar[1467]: linux-amd64/LICENSE Feb 13 19:53:11.644188 tar[1467]: linux-amd64/helm Feb 13 19:53:11.648265 jq[1471]: true Feb 13 19:53:11.664670 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:53:11.667523 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:53:11.667555 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:53:11.667692 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:53:11.670037 systemd-logind[1456]: New seat seat0. Feb 13 19:53:11.671607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:53:11.671657 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:53:11.673177 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:53:11.673193 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:53:11.677796 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:53:11.684982 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:53:11.687100 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:53:11.703016 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:53:11.703016 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:53:11.703016 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:53:11.710976 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Feb 13 19:53:11.705087 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:53:11.705313 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:53:11.716276 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:53:11.717406 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:53:11.720337 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
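The online resize grew /dev/vda9 from 553472 to 1864699 4k blocks, i.e. roughly 2.1 GiB to 7.1 GiB (1864699 × 4096 ≈ 7.6 GB). A short sketch for verifying the result after such a resize (standard e2fsprogs/coreutils commands, not taken from this log):

    # Block count and block size as recorded in the superblock
    dumpe2fs -h /dev/vda9 | grep -E 'Block (count|size)'
    # The mounted view of the same growth
    df -h /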
Feb 13 19:53:11.731188 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:53:11.754453 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:53:11.778186 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:53:11.793157 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:53:11.795502 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:40312.service - OpenSSH per-connection server daemon (10.0.0.1:40312). Feb 13 19:53:11.800086 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:53:11.800427 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:53:11.815077 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:53:11.826850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:53:11.835126 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:53:11.837588 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:53:11.838956 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:53:11.846658 containerd[1472]: time="2025-02-13T19:53:11.846575271Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:53:11.866975 sshd[1523]: Accepted publickey for core from 10.0.0.1 port 40312 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:11.869446 containerd[1472]: time="2025-02-13T19:53:11.869215660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:53:11.869390 sshd[1523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:11.871072 containerd[1472]: time="2025-02-13T19:53:11.871043477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871129 containerd[1472]: time="2025-02-13T19:53:11.871115943Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:53:11.871178 containerd[1472]: time="2025-02-13T19:53:11.871165616Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871390178Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871411377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871476399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871488803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871699007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871714275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871728001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871733 containerd[1472]: time="2025-02-13T19:53:11.871738421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:53:11.871974 containerd[1472]: time="2025-02-13T19:53:11.871843688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:53:11.872097 containerd[1472]: time="2025-02-13T19:53:11.872074641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:53:11.872224 containerd[1472]: time="2025-02-13T19:53:11.872203543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:53:11.872224 containerd[1472]: time="2025-02-13T19:53:11.872221436Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:53:11.872339 containerd[1472]: time="2025-02-13T19:53:11.872321805Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:53:11.872394 containerd[1472]: time="2025-02-13T19:53:11.872378782Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:53:11.878050 containerd[1472]: time="2025-02-13T19:53:11.878003249Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:53:11.878116 containerd[1472]: time="2025-02-13T19:53:11.878072209Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:53:11.878116 containerd[1472]: time="2025-02-13T19:53:11.878090233Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:53:11.878116 containerd[1472]: time="2025-02-13T19:53:11.878105862Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:53:11.878197 containerd[1472]: time="2025-02-13T19:53:11.878130558Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:53:11.878310 containerd[1472]: time="2025-02-13T19:53:11.878290428Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:53:11.878618 containerd[1472]: time="2025-02-13T19:53:11.878581233Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:53:11.878734 containerd[1472]: time="2025-02-13T19:53:11.878715645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 19:53:11.878756 containerd[1472]: time="2025-02-13T19:53:11.878736655Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:53:11.878756 containerd[1472]: time="2025-02-13T19:53:11.878751142Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:53:11.878807 containerd[1472]: time="2025-02-13T19:53:11.878765950Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878807 containerd[1472]: time="2025-02-13T19:53:11.878799613Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878859 containerd[1472]: time="2025-02-13T19:53:11.878812878Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878859 containerd[1472]: time="2025-02-13T19:53:11.878828778Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878859 containerd[1472]: time="2025-02-13T19:53:11.878844417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878859 containerd[1472]: time="2025-02-13T19:53:11.878857752Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878939 containerd[1472]: time="2025-02-13T19:53:11.878871939Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878939 containerd[1472]: time="2025-02-13T19:53:11.878884342Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:53:11.878939 containerd[1472]: time="2025-02-13T19:53:11.878906022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.878939 containerd[1472]: time="2025-02-13T19:53:11.878920029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.878939 containerd[1472]: time="2025-02-13T19:53:11.878933384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879035 containerd[1472]: time="2025-02-13T19:53:11.878947330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879035 containerd[1472]: time="2025-02-13T19:53:11.878960955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879035 containerd[1472]: time="2025-02-13T19:53:11.878974611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879035 containerd[1472]: time="2025-02-13T19:53:11.878987104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879035 containerd[1472]: time="2025-02-13T19:53:11.879000840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879035 containerd[1472]: time="2025-02-13T19:53:11.879015618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:53:11.879035 containerd[1472]: time="2025-02-13T19:53:11.879030666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879165 containerd[1472]: time="2025-02-13T19:53:11.879042909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879165 containerd[1472]: time="2025-02-13T19:53:11.879055793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879165 containerd[1472]: time="2025-02-13T19:53:11.879069679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879165 containerd[1472]: time="2025-02-13T19:53:11.879084567Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:53:11.879165 containerd[1472]: time="2025-02-13T19:53:11.879102651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879165 containerd[1472]: time="2025-02-13T19:53:11.879114554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879165 containerd[1472]: time="2025-02-13T19:53:11.879124943Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879170649Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879185216Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879195195Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879206967Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879217326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879229609Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879244758Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:53:11.879286 containerd[1472]: time="2025-02-13T19:53:11.879256390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:53:11.879585 containerd[1472]: time="2025-02-13T19:53:11.879524893Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:53:11.879585 containerd[1472]: time="2025-02-13T19:53:11.879582110Z" level=info msg="Connect containerd service" Feb 13 19:53:11.879732 containerd[1472]: time="2025-02-13T19:53:11.879620743Z" level=info msg="using legacy CRI server" Feb 13 19:53:11.879732 containerd[1472]: time="2025-02-13T19:53:11.879627626Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:53:11.879732 containerd[1472]: time="2025-02-13T19:53:11.879711373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:53:11.880309 containerd[1472]: time="2025-02-13T19:53:11.880275901Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:53:11.880386 
systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:53:11.880568 containerd[1472]: time="2025-02-13T19:53:11.880537672Z" level=info msg="Start subscribing containerd event" Feb 13 19:53:11.880648 containerd[1472]: time="2025-02-13T19:53:11.880585692Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:53:11.880788 containerd[1472]: time="2025-02-13T19:53:11.880749670Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:53:11.880825 containerd[1472]: time="2025-02-13T19:53:11.880625717Z" level=info msg="Start recovering state" Feb 13 19:53:11.880872 containerd[1472]: time="2025-02-13T19:53:11.880855759Z" level=info msg="Start event monitor" Feb 13 19:53:11.880894 containerd[1472]: time="2025-02-13T19:53:11.880871288Z" level=info msg="Start snapshots syncer" Feb 13 19:53:11.880894 containerd[1472]: time="2025-02-13T19:53:11.880881778Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:53:11.880894 containerd[1472]: time="2025-02-13T19:53:11.880890885Z" level=info msg="Start streaming server" Feb 13 19:53:11.880969 containerd[1472]: time="2025-02-13T19:53:11.880952360Z" level=info msg="containerd successfully booted in 0.035534s" Feb 13 19:53:11.881574 systemd-logind[1456]: New session 1 of user core. Feb 13 19:53:11.889079 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:53:11.890531 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:53:11.902103 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:53:11.909996 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:53:11.915638 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:53:12.019698 systemd[1538]: Queued start job for default target default.target. Feb 13 19:53:12.035114 systemd[1538]: Created slice app.slice - User Application Slice. Feb 13 19:53:12.035140 systemd[1538]: Reached target paths.target - Paths. Feb 13 19:53:12.035154 systemd[1538]: Reached target timers.target - Timers. Feb 13 19:53:12.036717 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:53:12.048461 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:53:12.048599 systemd[1538]: Reached target sockets.target - Sockets. Feb 13 19:53:12.048613 systemd[1538]: Reached target basic.target - Basic System. Feb 13 19:53:12.048647 systemd[1538]: Reached target default.target - Main User Target. Feb 13 19:53:12.048681 systemd[1538]: Startup finished in 125ms. Feb 13 19:53:12.049319 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:53:12.060898 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:53:12.095913 tar[1467]: linux-amd64/README.md Feb 13 19:53:12.113672 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:53:12.116399 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:40316.service - OpenSSH per-connection server daemon (10.0.0.1:40316). Feb 13 19:53:12.154198 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 40316 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:12.155904 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:12.159838 systemd-logind[1456]: New session 2 of user core. 
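containerd reports both of its sockets and "successfully booted", and logind opens the first user session under user@500.service. A hedged sketch for checking both from a shell (stock ctr and loginctl usage, not taken from this log):

    # Verify containerd answers on the socket it just announced
    ctr --address /run/containerd/containerd.sock version
    # Show the session and user manager that logind and user@500.service created
    loginctl list-sessions
    systemctl status user@500.service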
Feb 13 19:53:12.169929 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:53:12.225144 sshd[1552]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:12.232686 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:40316.service: Deactivated successfully. Feb 13 19:53:12.234365 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:53:12.235922 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:53:12.251016 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:40322.service - OpenSSH per-connection server daemon (10.0.0.1:40322). Feb 13 19:53:12.253289 systemd-logind[1456]: Removed session 2. Feb 13 19:53:12.287124 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 40322 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:12.288616 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:12.292651 systemd-logind[1456]: New session 3 of user core. Feb 13 19:53:12.305016 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:53:12.360921 sshd[1559]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:12.365490 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:40322.service: Deactivated successfully. Feb 13 19:53:12.367358 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:53:12.368047 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:53:12.369000 systemd-logind[1456]: Removed session 3. Feb 13 19:53:12.713021 systemd-networkd[1403]: eth0: Gained IPv6LL Feb 13 19:53:12.716355 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:53:12.718161 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:53:12.728004 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:53:12.730411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:12.732554 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:53:12.753746 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:53:12.753999 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:53:12.755715 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:53:12.758033 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:53:13.396115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:13.397736 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:53:13.399444 systemd[1]: Startup finished in 674ms (kernel) + 5.126s (initrd) + 3.963s (userspace) = 9.764s. Feb 13 19:53:13.401938 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:13.800716 kubelet[1587]: E0213 19:53:13.800521 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:13.804605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:13.804831 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
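kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-bootstrapped node that file is only written during init. A hedged sketch of the usual remedy, assuming kubeadm is the intended bootstrapper (the log itself shows only the failure):

    # Writes /var/lib/kubelet/config.yaml and starts kubelet (kubeadm assumption)
    kubeadm init phase kubelet-start
    # Meanwhile, inspect why the unit keeps failing
    systemctl status kubelet.service --no-pager
    journalctl -u kubelet.service --no-pager | tail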
Feb 13 19:53:22.371427 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:47116.service - OpenSSH per-connection server daemon (10.0.0.1:47116). Feb 13 19:53:22.408455 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 47116 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:22.410000 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:22.413608 systemd-logind[1456]: New session 4 of user core. Feb 13 19:53:22.426888 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:53:22.479680 sshd[1600]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:22.490368 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:47116.service: Deactivated successfully. Feb 13 19:53:22.492181 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:53:22.493578 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:53:22.494766 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:47120.service - OpenSSH per-connection server daemon (10.0.0.1:47120). Feb 13 19:53:22.495424 systemd-logind[1456]: Removed session 4. Feb 13 19:53:22.531212 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 47120 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:22.532651 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:22.536340 systemd-logind[1456]: New session 5 of user core. Feb 13 19:53:22.550906 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:53:22.599864 sshd[1607]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:22.612280 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:47120.service: Deactivated successfully. Feb 13 19:53:22.613792 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:53:22.615047 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:53:22.616153 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:47128.service - OpenSSH per-connection server daemon (10.0.0.1:47128). Feb 13 19:53:22.616833 systemd-logind[1456]: Removed session 5. Feb 13 19:53:22.652457 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 47128 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:22.653665 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:22.657104 systemd-logind[1456]: New session 6 of user core. Feb 13 19:53:22.666876 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:53:22.719759 sshd[1614]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:22.726185 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:47128.service: Deactivated successfully. Feb 13 19:53:22.727743 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:53:22.729045 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:53:22.737011 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:47142.service - OpenSSH per-connection server daemon (10.0.0.1:47142). Feb 13 19:53:22.737816 systemd-logind[1456]: Removed session 6. Feb 13 19:53:22.768620 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 47142 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:22.769879 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:22.773441 systemd-logind[1456]: New session 7 of user core. Feb 13 19:53:22.783888 systemd[1]: Started session-7.scope - Session 7 of User core. 
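Every connection above logs the same client key fingerprint (SHA256:w6wK…) before pam_unix opens a session for core. A small sketch of how such fingerprints are produced for comparison, using standard ssh-keygen; the paths are conventional OpenSSH locations, not confirmed by this log:

    # Fingerprint of a local public key, matching the sshd 'Accepted publickey' line
    ssh-keygen -lf ~/.ssh/id_rsa.pub
    # Fingerprints of the host keys generated by sshd-keygen.service earlier
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub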
Feb 13 19:53:22.840007 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:53:22.840358 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:53:22.860679 sudo[1624]: pam_unix(sudo:session): session closed for user root Feb 13 19:53:22.862396 sshd[1621]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:22.882483 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:47142.service: Deactivated successfully. Feb 13 19:53:22.884145 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:53:22.885591 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:53:22.886834 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:47150.service - OpenSSH per-connection server daemon (10.0.0.1:47150). Feb 13 19:53:22.887564 systemd-logind[1456]: Removed session 7. Feb 13 19:53:22.924467 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 47150 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:22.925960 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:22.929611 systemd-logind[1456]: New session 8 of user core. Feb 13 19:53:22.938877 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:53:22.991553 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:53:22.991958 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:53:22.995329 sudo[1633]: pam_unix(sudo:session): session closed for user root Feb 13 19:53:23.001452 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:53:23.001841 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:53:23.022002 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:53:23.023625 auditctl[1636]: No rules Feb 13 19:53:23.024824 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:53:23.025068 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:53:23.026715 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:53:23.054618 augenrules[1654]: No rules Feb 13 19:53:23.056280 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:53:23.057531 sudo[1632]: pam_unix(sudo:session): session closed for user root Feb 13 19:53:23.059266 sshd[1629]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:23.069431 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:47150.service: Deactivated successfully. Feb 13 19:53:23.070949 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:53:23.072637 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:53:23.083017 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:47166.service - OpenSSH per-connection server daemon (10.0.0.1:47166). Feb 13 19:53:23.083877 systemd-logind[1456]: Removed session 8. Feb 13 19:53:23.115435 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 47166 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:53:23.116790 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:23.120504 systemd-logind[1456]: New session 9 of user core. Feb 13 19:53:23.129916 systemd[1]: Started session-9.scope - Session 9 of User core. 
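The audit-rules cycle above stops the service, auditctl reports "No rules", and augenrules reloads an empty rule set after the sudo session removed the rules.d files. A minimal sketch of the same sequence done by hand (standard auditctl/augenrules usage):

    # Drop all loaded audit rules, as the stop job does
    auditctl -D
    # List what is currently loaded ("No rules" in the log)
    auditctl -l
    # Rebuild and load rules from /etc/audit/rules.d, as the restart does
    augenrules --load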
Feb 13 19:53:23.182845 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:53:23.183223 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:53:23.457996 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:53:23.458142 (dockerd)[1684]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:53:23.727171 dockerd[1684]: time="2025-02-13T19:53:23.727035304Z" level=info msg="Starting up" Feb 13 19:53:23.827384 dockerd[1684]: time="2025-02-13T19:53:23.827332537Z" level=info msg="Loading containers: start." Feb 13 19:53:23.929817 kernel: Initializing XFRM netlink socket Feb 13 19:53:23.960021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:53:23.968020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:24.001348 systemd-networkd[1403]: docker0: Link UP Feb 13 19:53:24.176687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:24.180904 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:24.223143 kubelet[1793]: E0213 19:53:24.223086 1793 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:24.229987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:24.230218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:53:24.369261 dockerd[1684]: time="2025-02-13T19:53:24.369154787Z" level=info msg="Loading containers: done." Feb 13 19:53:24.383234 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2391929142-merged.mount: Deactivated successfully. Feb 13 19:53:24.386879 dockerd[1684]: time="2025-02-13T19:53:24.386839152Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:53:24.386947 dockerd[1684]: time="2025-02-13T19:53:24.386934541Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:53:24.387085 dockerd[1684]: time="2025-02-13T19:53:24.387058854Z" level=info msg="Daemon has completed initialization" Feb 13 19:53:24.421748 dockerd[1684]: time="2025-02-13T19:53:24.421701571Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:53:24.422484 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:53:24.906255 containerd[1472]: time="2025-02-13T19:53:24.906219034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:53:25.524990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294058034.mount: Deactivated successfully. 
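dockerd is now up with the overlay2 storage driver (flagging the redirect_dir performance caveat), and containerd begins pulling registry.k8s.io/kube-apiserver:v1.32.2 through its CRI plugin. A hedged sketch for observing both; crictl is an assumption here and must be pointed at the containerd socket:

    # Confirm the storage driver the overlay2 warning refers to
    docker info --format '{{.Driver}}'
    # Pull the same image via CRI (assumes crictl is configured for
    # unix:///run/containerd/containerd.sock)
    crictl pull registry.k8s.io/kube-apiserver:v1.32.2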
Feb 13 19:53:26.371761 containerd[1472]: time="2025-02-13T19:53:26.371705946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:26.372377 containerd[1472]: time="2025-02-13T19:53:26.372313866Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:53:26.374821 containerd[1472]: time="2025-02-13T19:53:26.374789088Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:26.377682 containerd[1472]: time="2025-02-13T19:53:26.377644953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:26.378716 containerd[1472]: time="2025-02-13T19:53:26.378678602Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 1.472423319s" Feb 13 19:53:26.378751 containerd[1472]: time="2025-02-13T19:53:26.378713467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:53:26.379315 containerd[1472]: time="2025-02-13T19:53:26.379280601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:53:27.518878 containerd[1472]: time="2025-02-13T19:53:27.518819122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:27.519630 containerd[1472]: time="2025-02-13T19:53:27.519557327Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:53:27.520738 containerd[1472]: time="2025-02-13T19:53:27.520708596Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:27.523362 containerd[1472]: time="2025-02-13T19:53:27.523331194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:27.524379 containerd[1472]: time="2025-02-13T19:53:27.524346999Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.145031343s" Feb 13 19:53:27.524419 containerd[1472]: time="2025-02-13T19:53:27.524378187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:53:27.524841 
containerd[1472]: time="2025-02-13T19:53:27.524822691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:53:28.950960 containerd[1472]: time="2025-02-13T19:53:28.950896288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:28.951905 containerd[1472]: time="2025-02-13T19:53:28.951842893Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:53:28.953525 containerd[1472]: time="2025-02-13T19:53:28.953498408Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:28.956514 containerd[1472]: time="2025-02-13T19:53:28.956459391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:28.957457 containerd[1472]: time="2025-02-13T19:53:28.957417538Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.432520437s" Feb 13 19:53:28.957507 containerd[1472]: time="2025-02-13T19:53:28.957458454Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:53:28.958028 containerd[1472]: time="2025-02-13T19:53:28.957988158Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:53:29.852970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814384002.mount: Deactivated successfully. 
Feb 13 19:53:30.550006 containerd[1472]: time="2025-02-13T19:53:30.549943472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:30.550728 containerd[1472]: time="2025-02-13T19:53:30.550667911Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:53:30.551859 containerd[1472]: time="2025-02-13T19:53:30.551823278Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:30.553721 containerd[1472]: time="2025-02-13T19:53:30.553684418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:30.554294 containerd[1472]: time="2025-02-13T19:53:30.554246813Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.5962292s" Feb 13 19:53:30.554327 containerd[1472]: time="2025-02-13T19:53:30.554293069Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:53:30.554843 containerd[1472]: time="2025-02-13T19:53:30.554727063Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:53:31.126073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447727058.mount: Deactivated successfully. 
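The kube-proxy pull reports 30,908,839 bytes read in about 1.60 s, i.e. roughly 19 MB/s. A throwaway shell estimate of that throughput (illustrative arithmetic only):

    # bytes / milliseconds = kB/s: 30908839 / 1596 ≈ 19366 kB/s ≈ 19.4 MB/s
    echo $(( 30908839 / 1596 )) kB/s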
Feb 13 19:53:31.798235 containerd[1472]: time="2025-02-13T19:53:31.798179951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:31.799023 containerd[1472]: time="2025-02-13T19:53:31.798956337Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:53:31.800027 containerd[1472]: time="2025-02-13T19:53:31.799998321Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:31.802523 containerd[1472]: time="2025-02-13T19:53:31.802500814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:31.803718 containerd[1472]: time="2025-02-13T19:53:31.803674586Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.248916834s" Feb 13 19:53:31.803769 containerd[1472]: time="2025-02-13T19:53:31.803723387Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:53:31.804184 containerd[1472]: time="2025-02-13T19:53:31.804156169Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:53:32.262924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541414768.mount: Deactivated successfully. 
Feb 13 19:53:32.268335 containerd[1472]: time="2025-02-13T19:53:32.268284545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:32.268968 containerd[1472]: time="2025-02-13T19:53:32.268920368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:53:32.270091 containerd[1472]: time="2025-02-13T19:53:32.270056608Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:32.272123 containerd[1472]: time="2025-02-13T19:53:32.272093909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:32.272768 containerd[1472]: time="2025-02-13T19:53:32.272729201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.54528ms" Feb 13 19:53:32.272823 containerd[1472]: time="2025-02-13T19:53:32.272767031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:53:32.273302 containerd[1472]: time="2025-02-13T19:53:32.273264544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:53:32.801816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1163315920.mount: Deactivated successfully. Feb 13 19:53:34.480498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:53:34.489975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:34.641970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:34.647580 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:34.682624 kubelet[2042]: E0213 19:53:34.682555 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:34.686554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:34.686755 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
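kubelet has now failed twice on the missing config file, and systemd schedules another restart ("restart counter is at 2"). A short sketch for inspecting the unit's restart bookkeeping (standard systemd properties; NRestarts requires a reasonably recent systemd):

    # Restart policy, delay, and how many restarts systemd has performed
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
    # The failure itself, as captured above
    systemctl status kubelet.service --no-pager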
Feb 13 19:53:34.926371 containerd[1472]: time="2025-02-13T19:53:34.926211687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:34.927082 containerd[1472]: time="2025-02-13T19:53:34.926999104Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:53:34.928186 containerd[1472]: time="2025-02-13T19:53:34.928148770Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:34.931497 containerd[1472]: time="2025-02-13T19:53:34.931444751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:34.932720 containerd[1472]: time="2025-02-13T19:53:34.932669268Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.659372012s" Feb 13 19:53:34.932769 containerd[1472]: time="2025-02-13T19:53:34.932718410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:53:36.942363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:36.952983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:36.978571 systemd[1]: Reloading requested from client PID 2078 ('systemctl') (unit session-9.scope)... Feb 13 19:53:36.978594 systemd[1]: Reloading... Feb 13 19:53:37.059809 zram_generator::config[2121]: No configuration found. Feb 13 19:53:37.310611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:53:37.386414 systemd[1]: Reloading finished in 407 ms. Feb 13 19:53:37.428610 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:53:37.428708 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:53:37.428996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:37.431380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:37.586840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:37.593093 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:53:37.632183 kubelet[2166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:37.632183 kubelet[2166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
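kubelet warns that --container-runtime-endpoint and --volume-plugin-dir belong in the config file, and that --pod-infra-container-image will be dropped once the sandbox image comes from CRI. A hedged sketch of the corresponding KubeletConfiguration fields (field names per the v1beta1 API; values are illustrative, not read from this host):

    # /var/lib/kubelet/config.yaml (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/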
Feb 13 19:53:37.632183 kubelet[2166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:37.632547 kubelet[2166]: I0213 19:53:37.632231 2166 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:53:38.111454 kubelet[2166]: I0213 19:53:38.111399 2166 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:53:38.111454 kubelet[2166]: I0213 19:53:38.111436 2166 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:53:38.111729 kubelet[2166]: I0213 19:53:38.111705 2166 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:53:38.130540 kubelet[2166]: E0213 19:53:38.130491 2166 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:38.131493 kubelet[2166]: I0213 19:53:38.131466 2166 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:53:38.139971 kubelet[2166]: E0213 19:53:38.139946 2166 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:53:38.140013 kubelet[2166]: I0213 19:53:38.139972 2166 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:53:38.144931 kubelet[2166]: I0213 19:53:38.144909 2166 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:53:38.145993 kubelet[2166]: I0213 19:53:38.145951 2166 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:53:38.146153 kubelet[2166]: I0213 19:53:38.145986 2166 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:53:38.146236 kubelet[2166]: I0213 19:53:38.146154 2166 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:53:38.146236 kubelet[2166]: I0213 19:53:38.146163 2166 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:53:38.146297 kubelet[2166]: I0213 19:53:38.146285 2166 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:38.148659 kubelet[2166]: I0213 19:53:38.148634 2166 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:53:38.148659 kubelet[2166]: I0213 19:53:38.148654 2166 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:53:38.148707 kubelet[2166]: I0213 19:53:38.148672 2166 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:53:38.148707 kubelet[2166]: I0213 19:53:38.148682 2166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:53:38.151281 kubelet[2166]: I0213 19:53:38.151263 2166 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:53:38.151617 kubelet[2166]: I0213 19:53:38.151583 2166 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:53:38.153158 kubelet[2166]: W0213 19:53:38.152644 2166 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:53:38.154643 kubelet[2166]: W0213 19:53:38.154443 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:38.154643 kubelet[2166]: E0213 19:53:38.154510 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:38.155047 kubelet[2166]: I0213 19:53:38.155018 2166 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:53:38.155047 kubelet[2166]: I0213 19:53:38.155049 2166 server.go:1287] "Started kubelet" Feb 13 19:53:38.156273 kubelet[2166]: W0213 19:53:38.155555 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:38.156273 kubelet[2166]: E0213 19:53:38.155602 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:38.156273 kubelet[2166]: I0213 19:53:38.155648 2166 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:53:38.156904 kubelet[2166]: I0213 19:53:38.156478 2166 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:53:38.157605 kubelet[2166]: I0213 19:53:38.157462 2166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:53:38.157803 kubelet[2166]: I0213 19:53:38.157759 2166 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:53:38.158679 kubelet[2166]: I0213 19:53:38.158655 2166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:53:38.158749 kubelet[2166]: E0213 19:53:38.158728 2166 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:53:38.158848 kubelet[2166]: I0213 19:53:38.158829 2166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:53:38.159581 kubelet[2166]: E0213 19:53:38.159224 2166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:53:38.159581 kubelet[2166]: I0213 19:53:38.159257 2166 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:53:38.159581 kubelet[2166]: I0213 19:53:38.159418 2166 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:53:38.159581 kubelet[2166]: I0213 19:53:38.159460 2166 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:53:38.160375 kubelet[2166]: W0213 19:53:38.160230 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:38.160375 kubelet[2166]: E0213 19:53:38.160273 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:38.161036 kubelet[2166]: E0213 19:53:38.159008 2166 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dc960f4cb1e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:53:38.15503306 +0000 UTC m=+0.557076338,LastTimestamp:2025-02-13 19:53:38.15503306 +0000 UTC m=+0.557076338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:53:38.161036 kubelet[2166]: E0213 19:53:38.160969 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Feb 13 19:53:38.161274 kubelet[2166]: I0213 19:53:38.161259 2166 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:53:38.161332 kubelet[2166]: I0213 19:53:38.161313 2166 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:53:38.161417 kubelet[2166]: I0213 19:53:38.161399 2166 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:53:38.174735 kubelet[2166]: I0213 19:53:38.174691 2166 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:53:38.176194 kubelet[2166]: I0213 19:53:38.176026 2166 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:53:38.176194 kubelet[2166]: I0213 19:53:38.176046 2166 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:53:38.176194 kubelet[2166]: I0213 19:53:38.176064 2166 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:53:38.176194 kubelet[2166]: I0213 19:53:38.176073 2166 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:53:38.176194 kubelet[2166]: E0213 19:53:38.176120 2166 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:53:38.177459 kubelet[2166]: I0213 19:53:38.177229 2166 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:53:38.177459 kubelet[2166]: I0213 19:53:38.177242 2166 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:53:38.177459 kubelet[2166]: I0213 19:53:38.177258 2166 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:38.177768 kubelet[2166]: W0213 19:53:38.177693 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:38.177768 kubelet[2166]: E0213 19:53:38.177726 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:38.259318 kubelet[2166]: E0213 19:53:38.259295 2166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:53:38.276682 kubelet[2166]: E0213 19:53:38.276654 2166 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:53:38.359903 kubelet[2166]: E0213 19:53:38.359858 2166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:53:38.362442 kubelet[2166]: E0213 19:53:38.362355 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Feb 13 19:53:38.460580 kubelet[2166]: E0213 19:53:38.460550 2166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:53:38.476867 kubelet[2166]: E0213 19:53:38.476813 2166 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:53:38.529532 kubelet[2166]: I0213 19:53:38.529503 2166 policy_none.go:49] "None policy: Start" Feb 13 19:53:38.529532 kubelet[2166]: I0213 19:53:38.529523 2166 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:53:38.529532 kubelet[2166]: I0213 19:53:38.529535 2166 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:53:38.535914 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:53:38.549414 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 19:53:38.552162 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:53:38.560551 kubelet[2166]: I0213 19:53:38.560523 2166 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:53:38.560611 kubelet[2166]: E0213 19:53:38.560587 2166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:53:38.560819 kubelet[2166]: I0213 19:53:38.560720 2166 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:53:38.560819 kubelet[2166]: I0213 19:53:38.560738 2166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:53:38.560966 kubelet[2166]: I0213 19:53:38.560920 2166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:53:38.562187 kubelet[2166]: E0213 19:53:38.562154 2166 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:53:38.562232 kubelet[2166]: E0213 19:53:38.562188 2166 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:53:38.661890 kubelet[2166]: I0213 19:53:38.661803 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:53:38.662205 kubelet[2166]: E0213 19:53:38.662126 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Feb 13 19:53:38.763005 kubelet[2166]: E0213 19:53:38.762963 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Feb 13 19:53:38.862965 kubelet[2166]: I0213 19:53:38.862949 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:53:38.863254 kubelet[2166]: E0213 19:53:38.863233 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Feb 13 19:53:38.883732 systemd[1]: Created slice kubepods-burstable-pod8da7e997f9e8e6bdca6a718d540278d2.slice - libcontainer container kubepods-burstable-pod8da7e997f9e8e6bdca6a718d540278d2.slice. Feb 13 19:53:38.896523 kubelet[2166]: E0213 19:53:38.896497 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:38.899469 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:53:38.901210 kubelet[2166]: E0213 19:53:38.901186 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:38.902891 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
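The systemd lines above show the cgroup layout implied by CgroupDriver "systemd": a top-level kubepods.slice, per-QoS child slices for Burstable and BestEffort pods, and one slice per pod named after its UID with dashes mapped to underscores. The helper below is our sketch, not kubelet's API; it just reproduces the slice names seen in this log.

```go
// Reproduce the systemd slice names kubelet creates per pod, as seen in
// the "Created slice kubepods-..." lines in this log.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	uid = strings.ReplaceAll(uid, "-", "_") // systemd unit names cannot contain '-' inside a label
	switch qos {
	case "Guaranteed":
		// Guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	case "Burstable":
		return fmt.Sprintf("kubepods-burstable-pod%s.slice", uid)
	default: // BestEffort
		return fmt.Sprintf("kubepods-besteffort-pod%s.slice", uid)
	}
}

func main() {
	// Matches the kube-apiserver static pod slice created above.
	fmt.Println(podSlice("Burstable", "8da7e997f9e8e6bdca6a718d540278d2"))
	// Matches the kube-proxy pod slice created at the end of this section,
	// where the dashes in the UID become underscores.
	fmt.Println(podSlice("BestEffort", "c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2"))
}
```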
Feb 13 19:53:38.904339 kubelet[2166]: E0213 19:53:38.904313 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:38.963760 kubelet[2166]: I0213 19:53:38.963693 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:38.963760 kubelet[2166]: I0213 19:53:38.963725 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:38.963760 kubelet[2166]: I0213 19:53:38.963751 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:38.963869 kubelet[2166]: I0213 19:53:38.963770 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:53:38.963869 kubelet[2166]: I0213 19:53:38.963810 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:38.963869 kubelet[2166]: I0213 19:53:38.963827 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8da7e997f9e8e6bdca6a718d540278d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8da7e997f9e8e6bdca6a718d540278d2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:38.963869 kubelet[2166]: I0213 19:53:38.963843 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8da7e997f9e8e6bdca6a718d540278d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8da7e997f9e8e6bdca6a718d540278d2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:38.963869 kubelet[2166]: I0213 19:53:38.963860 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:38.963978 kubelet[2166]: I0213 19:53:38.963874 2166 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8da7e997f9e8e6bdca6a718d540278d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8da7e997f9e8e6bdca6a718d540278d2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:38.982088 kubelet[2166]: W0213 19:53:38.982069 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:38.982132 kubelet[2166]: E0213 19:53:38.982098 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:39.197384 kubelet[2166]: E0213 19:53:39.197354 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.197915 containerd[1472]: time="2025-02-13T19:53:39.197873187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8da7e997f9e8e6bdca6a718d540278d2,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:39.202058 kubelet[2166]: E0213 19:53:39.202023 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.202319 containerd[1472]: time="2025-02-13T19:53:39.202293056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:39.205552 kubelet[2166]: E0213 19:53:39.205530 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.205875 containerd[1472]: time="2025-02-13T19:53:39.205840759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:39.265031 kubelet[2166]: I0213 19:53:39.264975 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:53:39.265273 kubelet[2166]: E0213 19:53:39.265177 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Feb 13 19:53:39.414296 kubelet[2166]: W0213 19:53:39.414236 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:39.414347 kubelet[2166]: E0213 19:53:39.414322 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:39.472506 
kubelet[2166]: W0213 19:53:39.472445 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:39.472556 kubelet[2166]: E0213 19:53:39.472516 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:39.540177 kubelet[2166]: W0213 19:53:39.540132 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Feb 13 19:53:39.540235 kubelet[2166]: E0213 19:53:39.540176 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:39.564120 kubelet[2166]: E0213 19:53:39.564063 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s" Feb 13 19:53:39.667616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129647320.mount: Deactivated successfully. 
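The "Failed to ensure lease exists, will retry" errors above double their retry interval on each consecutive failure: 200ms, then 400ms, then 800ms, and now 1.6s. The sketch below is our illustration of that doubling pattern, not client-go's actual backoff implementation, which also applies a cap and jitter.

```go
// Illustrate the doubling retry interval visible in the lease errors
// above (200ms -> 400ms -> 800ms -> 1.6s).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2 // double on each consecutive failure
	}
}
```

Running it prints 200ms, 400ms, 800ms, 1.6s, matching the interval= values logged while the API server at 10.0.0.67:6443 was still refusing connections.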
Feb 13 19:53:39.674772 containerd[1472]: time="2025-02-13T19:53:39.674729193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:39.675671 containerd[1472]: time="2025-02-13T19:53:39.675626866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:39.676414 containerd[1472]: time="2025-02-13T19:53:39.676390809Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:39.677347 containerd[1472]: time="2025-02-13T19:53:39.677296477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:53:39.678133 containerd[1472]: time="2025-02-13T19:53:39.678090196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:53:39.678917 containerd[1472]: time="2025-02-13T19:53:39.678869187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:53:39.679850 containerd[1472]: time="2025-02-13T19:53:39.679803789Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:39.683458 containerd[1472]: time="2025-02-13T19:53:39.683424319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:39.684285 containerd[1472]: time="2025-02-13T19:53:39.684244678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.333076ms" Feb 13 19:53:39.685492 containerd[1472]: time="2025-02-13T19:53:39.685447804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.108782ms" Feb 13 19:53:39.686559 containerd[1472]: time="2025-02-13T19:53:39.686530214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.587406ms" Feb 13 19:53:39.831991 containerd[1472]: time="2025-02-13T19:53:39.831319796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:39.831991 containerd[1472]: time="2025-02-13T19:53:39.831386170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:39.831991 containerd[1472]: time="2025-02-13T19:53:39.831429722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:39.831991 containerd[1472]: time="2025-02-13T19:53:39.831663000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:39.832891 containerd[1472]: time="2025-02-13T19:53:39.832688984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:39.832891 containerd[1472]: time="2025-02-13T19:53:39.832758033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:39.833085 containerd[1472]: time="2025-02-13T19:53:39.832942489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:39.833191 containerd[1472]: time="2025-02-13T19:53:39.833140611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:39.833931 containerd[1472]: time="2025-02-13T19:53:39.833660716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:39.833931 containerd[1472]: time="2025-02-13T19:53:39.833728693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:39.833931 containerd[1472]: time="2025-02-13T19:53:39.833745745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:39.833931 containerd[1472]: time="2025-02-13T19:53:39.833879977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:39.862914 systemd[1]: Started cri-containerd-9912bfae56407ef4d0725f50c95ec67b7c0371f37a7f4209616e3275da55a3bb.scope - libcontainer container 9912bfae56407ef4d0725f50c95ec67b7c0371f37a7f4209616e3275da55a3bb. Feb 13 19:53:39.867074 systemd[1]: Started cri-containerd-368515efd761cc6b123e19a5f05b3cbfb41c8bec155b5f25213de6c3a01316ce.scope - libcontainer container 368515efd761cc6b123e19a5f05b3cbfb41c8bec155b5f25213de6c3a01316ce. Feb 13 19:53:39.868914 systemd[1]: Started cri-containerd-58dce2e28952fdad1952202a1aa4a4e539e13fda5edfcd4d2ff868eb4daf7057.scope - libcontainer container 58dce2e28952fdad1952202a1aa4a4e539e13fda5edfcd4d2ff868eb4daf7057. 
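The recurring dns.go:153 "Nameserver limits exceeded" errors in this section indicate that the host's resolv.conf listed more nameservers than the classic resolver limit of three, so kubelet keeps only the first three (the applied line shows 1.1.1.1, 1.0.0.1, 8.8.8.8). The constant and truncation below are our illustration of that behavior, not kubelet's source; the fourth nameserver is hypothetical.

```go
// Sketch of the nameserver truncation implied by the dns.go warnings.
package main

import "fmt"

const maxNameservers = 3 // classic resolver limit (glibc MAXNS)

func applyNameserverLimit(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers] // drop everything past the third entry
	}
	return ns
}

func main() {
	// Hypothetical resolv.conf with four entries; the applied line in the
	// log ("1.1.1.1 1.0.0.1 8.8.8.8") is consistent with a fourth being dropped.
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println(applyNameserverLimit(configured))
}
```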
Feb 13 19:53:39.903947 containerd[1472]: time="2025-02-13T19:53:39.903908405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"9912bfae56407ef4d0725f50c95ec67b7c0371f37a7f4209616e3275da55a3bb\"" Feb 13 19:53:39.904929 kubelet[2166]: E0213 19:53:39.904904 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.906888 containerd[1472]: time="2025-02-13T19:53:39.906813714Z" level=info msg="CreateContainer within sandbox \"9912bfae56407ef4d0725f50c95ec67b7c0371f37a7f4209616e3275da55a3bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:53:39.909734 containerd[1472]: time="2025-02-13T19:53:39.909705507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"368515efd761cc6b123e19a5f05b3cbfb41c8bec155b5f25213de6c3a01316ce\"" Feb 13 19:53:39.910868 containerd[1472]: time="2025-02-13T19:53:39.910820268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8da7e997f9e8e6bdca6a718d540278d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"58dce2e28952fdad1952202a1aa4a4e539e13fda5edfcd4d2ff868eb4daf7057\"" Feb 13 19:53:39.911526 kubelet[2166]: E0213 19:53:39.911498 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.911586 kubelet[2166]: E0213 19:53:39.911561 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.912976 containerd[1472]: time="2025-02-13T19:53:39.912933050Z" level=info msg="CreateContainer within sandbox \"368515efd761cc6b123e19a5f05b3cbfb41c8bec155b5f25213de6c3a01316ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:53:39.913433 containerd[1472]: time="2025-02-13T19:53:39.913105052Z" level=info msg="CreateContainer within sandbox \"58dce2e28952fdad1952202a1aa4a4e539e13fda5edfcd4d2ff868eb4daf7057\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:53:39.932091 containerd[1472]: time="2025-02-13T19:53:39.931995579Z" level=info msg="CreateContainer within sandbox \"9912bfae56407ef4d0725f50c95ec67b7c0371f37a7f4209616e3275da55a3bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3c6b0b97e0538d0b54712c62c1671ec19064fae604999b48b3fd905aeadda0b3\"" Feb 13 19:53:39.932648 containerd[1472]: time="2025-02-13T19:53:39.932614119Z" level=info msg="StartContainer for \"3c6b0b97e0538d0b54712c62c1671ec19064fae604999b48b3fd905aeadda0b3\"" Feb 13 19:53:39.937462 containerd[1472]: time="2025-02-13T19:53:39.937432756Z" level=info msg="CreateContainer within sandbox \"368515efd761cc6b123e19a5f05b3cbfb41c8bec155b5f25213de6c3a01316ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f1f28a47c464b4071415f4ab1ddfa10cf1a17bb66052e44f49e5a8e7df19e52\"" Feb 13 19:53:39.937972 containerd[1472]: time="2025-02-13T19:53:39.937937863Z" level=info msg="StartContainer for \"5f1f28a47c464b4071415f4ab1ddfa10cf1a17bb66052e44f49e5a8e7df19e52\"" Feb 13 
19:53:39.939642 containerd[1472]: time="2025-02-13T19:53:39.939614497Z" level=info msg="CreateContainer within sandbox \"58dce2e28952fdad1952202a1aa4a4e539e13fda5edfcd4d2ff868eb4daf7057\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d0804351381dbf5cf94dd7fd6e8ed6f4e652e29145545542242e674b3a6353b\"" Feb 13 19:53:39.940265 containerd[1472]: time="2025-02-13T19:53:39.940242575Z" level=info msg="StartContainer for \"6d0804351381dbf5cf94dd7fd6e8ed6f4e652e29145545542242e674b3a6353b\"" Feb 13 19:53:39.959823 systemd[1]: Started cri-containerd-3c6b0b97e0538d0b54712c62c1671ec19064fae604999b48b3fd905aeadda0b3.scope - libcontainer container 3c6b0b97e0538d0b54712c62c1671ec19064fae604999b48b3fd905aeadda0b3. Feb 13 19:53:39.968904 systemd[1]: Started cri-containerd-5f1f28a47c464b4071415f4ab1ddfa10cf1a17bb66052e44f49e5a8e7df19e52.scope - libcontainer container 5f1f28a47c464b4071415f4ab1ddfa10cf1a17bb66052e44f49e5a8e7df19e52. Feb 13 19:53:39.970274 systemd[1]: Started cri-containerd-6d0804351381dbf5cf94dd7fd6e8ed6f4e652e29145545542242e674b3a6353b.scope - libcontainer container 6d0804351381dbf5cf94dd7fd6e8ed6f4e652e29145545542242e674b3a6353b. Feb 13 19:53:40.067060 kubelet[2166]: I0213 19:53:40.067024 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:53:40.067770 kubelet[2166]: E0213 19:53:40.067737 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Feb 13 19:53:40.555834 containerd[1472]: time="2025-02-13T19:53:40.555769428Z" level=info msg="StartContainer for \"5f1f28a47c464b4071415f4ab1ddfa10cf1a17bb66052e44f49e5a8e7df19e52\" returns successfully" Feb 13 19:53:40.556283 containerd[1472]: time="2025-02-13T19:53:40.555954865Z" level=info msg="StartContainer for \"3c6b0b97e0538d0b54712c62c1671ec19064fae604999b48b3fd905aeadda0b3\" returns successfully" Feb 13 19:53:40.556283 containerd[1472]: time="2025-02-13T19:53:40.555979121Z" level=info msg="StartContainer for \"6d0804351381dbf5cf94dd7fd6e8ed6f4e652e29145545542242e674b3a6353b\" returns successfully" Feb 13 19:53:40.561702 kubelet[2166]: E0213 19:53:40.561669 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:40.561954 kubelet[2166]: E0213 19:53:40.561921 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:40.562135 kubelet[2166]: E0213 19:53:40.562041 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:40.562220 kubelet[2166]: E0213 19:53:40.562100 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:40.563987 kubelet[2166]: E0213 19:53:40.563964 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:40.564128 kubelet[2166]: E0213 19:53:40.564095 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
19:53:41.168620 kubelet[2166]: E0213 19:53:41.167761 2166 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:53:41.305744 kubelet[2166]: E0213 19:53:41.305698 2166 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 19:53:41.565307 kubelet[2166]: E0213 19:53:41.565271 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:41.565386 kubelet[2166]: E0213 19:53:41.565379 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:41.565610 kubelet[2166]: E0213 19:53:41.565582 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:53:41.565833 kubelet[2166]: E0213 19:53:41.565703 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:41.664136 kubelet[2166]: E0213 19:53:41.664092 2166 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 19:53:41.669267 kubelet[2166]: I0213 19:53:41.669239 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:53:41.673765 kubelet[2166]: I0213 19:53:41.673747 2166 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:53:41.673829 kubelet[2166]: E0213 19:53:41.673768 2166 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:53:41.760703 kubelet[2166]: I0213 19:53:41.760646 2166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:41.764542 kubelet[2166]: E0213 19:53:41.764516 2166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:41.764542 kubelet[2166]: I0213 19:53:41.764537 2166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:41.765817 kubelet[2166]: E0213 19:53:41.765757 2166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:41.765817 kubelet[2166]: I0213 19:53:41.765799 2166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:53:41.767189 kubelet[2166]: E0213 19:53:41.767143 2166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:53:42.151206 kubelet[2166]: I0213 19:53:42.151164 2166 apiserver.go:52] "Watching apiserver" Feb 13 19:53:42.160217 kubelet[2166]: I0213 19:53:42.160186 2166 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:53:42.566062 kubelet[2166]: I0213 19:53:42.566028 2166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:42.570324 kubelet[2166]: E0213 19:53:42.570293 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:43.029407 systemd[1]: Reloading requested from client PID 2441 ('systemctl') (unit session-9.scope)... Feb 13 19:53:43.029425 systemd[1]: Reloading... Feb 13 19:53:43.108814 zram_generator::config[2481]: No configuration found. Feb 13 19:53:43.217173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:53:43.306186 systemd[1]: Reloading finished in 276 ms. Feb 13 19:53:43.350171 kubelet[2166]: I0213 19:53:43.350114 2166 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:53:43.350183 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:43.371247 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:53:43.371528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:43.371602 systemd[1]: kubelet.service: Consumed 1.008s CPU time, 127.7M memory peak, 0B memory swap peak. Feb 13 19:53:43.385166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:43.544017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:43.548562 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:53:43.587762 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:43.587762 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:53:43.587762 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:43.588135 kubelet[2525]: I0213 19:53:43.587757 2525 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:53:43.593763 kubelet[2525]: I0213 19:53:43.593729 2525 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:53:43.593763 kubelet[2525]: I0213 19:53:43.593755 2525 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:53:43.594001 kubelet[2525]: I0213 19:53:43.593979 2525 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:53:43.595109 kubelet[2525]: I0213 19:53:43.595087 2525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
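Unlike the first kubelet (PID 2166), which had to bootstrap a client certificate and failed against the then-unreachable API server, the restarted kubelet (PID 2525) finds a rotated credential already on disk and loads it from kubelet-client-current.pem, a combined cert/key PEM. A short sketch (ours, not kubelet code) that inspects that file and prints the certificate's validity window, e.g. to check when the next rotation is due:

```go
// Print subject and validity of the kubelet client certificate loaded above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file holds both the certificate and the private key; walk every
	// PEM block and parse only the certificate blocks.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue // skip the private key block
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```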
Feb 13 19:53:43.597205 kubelet[2525]: I0213 19:53:43.597060 2525 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:53:43.603294 kubelet[2525]: E0213 19:53:43.603243 2525 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:53:43.603294 kubelet[2525]: I0213 19:53:43.603278 2525 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:53:43.608155 kubelet[2525]: I0213 19:53:43.608106 2525 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:53:43.608387 kubelet[2525]: I0213 19:53:43.608346 2525 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:53:43.608557 kubelet[2525]: I0213 19:53:43.608382 2525 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:53:43.608638 kubelet[2525]: I0213 19:53:43.608561 2525 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:53:43.608638 kubelet[2525]: I0213 19:53:43.608571 2525 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:53:43.608638 kubelet[2525]: I0213 19:53:43.608612 2525 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:43.608817 kubelet[2525]: I0213 19:53:43.608800 2525 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:53:43.608817 kubelet[2525]: I0213 19:53:43.608815 2525 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:53:43.609747 kubelet[2525]: I0213 19:53:43.608831 2525 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:53:43.609747 kubelet[2525]: I0213 19:53:43.608842 2525 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Feb 13 19:53:43.609747 kubelet[2525]: I0213 19:53:43.609319 2525 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:53:43.609747 kubelet[2525]: I0213 19:53:43.609669 2525 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:53:43.611037 kubelet[2525]: I0213 19:53:43.611011 2525 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:53:43.611226 kubelet[2525]: I0213 19:53:43.611215 2525 server.go:1287] "Started kubelet" Feb 13 19:53:43.611655 kubelet[2525]: I0213 19:53:43.611631 2525 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:53:43.612176 kubelet[2525]: I0213 19:53:43.612123 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:53:43.614972 kubelet[2525]: I0213 19:53:43.614957 2525 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:53:43.615059 kubelet[2525]: I0213 19:53:43.614362 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:53:43.618872 kubelet[2525]: I0213 19:53:43.613166 2525 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:53:43.619670 kubelet[2525]: E0213 19:53:43.619212 2525 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:53:43.621063 kubelet[2525]: I0213 19:53:43.614428 2525 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:53:43.621806 kubelet[2525]: I0213 19:53:43.621276 2525 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:53:43.621806 kubelet[2525]: I0213 19:53:43.621717 2525 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:53:43.621964 kubelet[2525]: I0213 19:53:43.621951 2525 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:53:43.626830 kubelet[2525]: I0213 19:53:43.626814 2525 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:53:43.627066 kubelet[2525]: I0213 19:53:43.627054 2525 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:53:43.627190 kubelet[2525]: I0213 19:53:43.627172 2525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:53:43.627285 kubelet[2525]: E0213 19:53:43.627255 2525 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:53:43.633333 kubelet[2525]: I0213 19:53:43.633300 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:53:43.634645 kubelet[2525]: I0213 19:53:43.634621 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:53:43.634739 kubelet[2525]: I0213 19:53:43.634728 2525 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:53:43.634815 kubelet[2525]: I0213 19:53:43.634805 2525 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:53:43.634860 kubelet[2525]: I0213 19:53:43.634852 2525 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:53:43.634976 kubelet[2525]: E0213 19:53:43.634946 2525 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:53:43.656029 kubelet[2525]: I0213 19:53:43.656001 2525 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:53:43.656029 kubelet[2525]: I0213 19:53:43.656021 2525 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:53:43.656091 kubelet[2525]: I0213 19:53:43.656038 2525 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:43.656174 kubelet[2525]: I0213 19:53:43.656158 2525 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:53:43.656219 kubelet[2525]: I0213 19:53:43.656172 2525 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:53:43.656219 kubelet[2525]: I0213 19:53:43.656196 2525 policy_none.go:49] "None policy: Start" Feb 13 19:53:43.656219 kubelet[2525]: I0213 19:53:43.656205 2525 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:53:43.656219 kubelet[2525]: I0213 19:53:43.656215 2525 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:53:43.656371 kubelet[2525]: I0213 19:53:43.656356 2525 state_mem.go:75] "Updated machine memory state" Feb 13 19:53:43.660104 kubelet[2525]: I0213 19:53:43.660030 2525 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:53:43.660209 kubelet[2525]: I0213 19:53:43.660188 2525 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:53:43.660310 kubelet[2525]: I0213 19:53:43.660202 2525 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:53:43.660412 kubelet[2525]: I0213 19:53:43.660400 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:53:43.661080 kubelet[2525]: E0213 19:53:43.661064 2525 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:53:43.736196 kubelet[2525]: I0213 19:53:43.736127 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:53:43.736196 kubelet[2525]: I0213 19:53:43.736157 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:43.736359 kubelet[2525]: I0213 19:53:43.736284 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:43.741873 kubelet[2525]: E0213 19:53:43.741830 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:43.764854 kubelet[2525]: I0213 19:53:43.764838 2525 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:53:43.770885 kubelet[2525]: I0213 19:53:43.770841 2525 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 19:53:43.770935 kubelet[2525]: I0213 19:53:43.770898 2525 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:53:43.923597 kubelet[2525]: I0213 19:53:43.923465 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:43.923597 kubelet[2525]: I0213 19:53:43.923529 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:43.923597 kubelet[2525]: I0213 19:53:43.923547 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:43.923597 kubelet[2525]: I0213 19:53:43.923565 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:43.923597 kubelet[2525]: I0213 19:53:43.923595 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:53:43.923767 kubelet[2525]: I0213 19:53:43.923609 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8da7e997f9e8e6bdca6a718d540278d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8da7e997f9e8e6bdca6a718d540278d2\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:43.923767 kubelet[2525]: I0213 19:53:43.923623 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8da7e997f9e8e6bdca6a718d540278d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8da7e997f9e8e6bdca6a718d540278d2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:43.923767 kubelet[2525]: I0213 19:53:43.923637 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8da7e997f9e8e6bdca6a718d540278d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8da7e997f9e8e6bdca6a718d540278d2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:53:43.923767 kubelet[2525]: I0213 19:53:43.923660 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:44.040635 kubelet[2525]: E0213 19:53:44.040590 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:44.041841 kubelet[2525]: E0213 19:53:44.041808 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:44.042843 kubelet[2525]: E0213 19:53:44.042825 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:44.610748 kubelet[2525]: I0213 19:53:44.610089 2525 apiserver.go:52] "Watching apiserver" Feb 13 19:53:44.622043 kubelet[2525]: I0213 19:53:44.621998 2525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:53:44.644166 kubelet[2525]: I0213 19:53:44.644134 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:44.644929 kubelet[2525]: E0213 19:53:44.644438 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:44.644929 kubelet[2525]: I0213 19:53:44.644701 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:53:44.664769 kubelet[2525]: E0213 19:53:44.664488 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:53:44.664769 kubelet[2525]: E0213 19:53:44.664644 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:44.664769 kubelet[2525]: E0213 19:53:44.664730 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:53:44.664955 kubelet[2525]: E0213 19:53:44.664822 2525 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:44.704803 kubelet[2525]: I0213 19:53:44.703885 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.703869224 podStartE2EDuration="1.703869224s" podCreationTimestamp="2025-02-13 19:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:44.69185018 +0000 UTC m=+1.139514883" watchObservedRunningTime="2025-02-13 19:53:44.703869224 +0000 UTC m=+1.151533928" Feb 13 19:53:44.717756 kubelet[2525]: I0213 19:53:44.717698 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7176775850000001 podStartE2EDuration="1.717677585s" podCreationTimestamp="2025-02-13 19:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:44.70466117 +0000 UTC m=+1.152325863" watchObservedRunningTime="2025-02-13 19:53:44.717677585 +0000 UTC m=+1.165342288" Feb 13 19:53:44.733799 kubelet[2525]: I0213 19:53:44.732586 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.732572776 podStartE2EDuration="2.732572776s" podCreationTimestamp="2025-02-13 19:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:44.718245708 +0000 UTC m=+1.165910411" watchObservedRunningTime="2025-02-13 19:53:44.732572776 +0000 UTC m=+1.180237479" Feb 13 19:53:45.646011 kubelet[2525]: E0213 19:53:45.645971 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:45.646507 kubelet[2525]: E0213 19:53:45.646060 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:45.646507 kubelet[2525]: E0213 19:53:45.646181 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:48.056702 sudo[1665]: pam_unix(sudo:session): session closed for user root Feb 13 19:53:48.058332 sshd[1662]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:48.061792 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:47166.service: Deactivated successfully. Feb 13 19:53:48.063648 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:53:48.063858 systemd[1]: session-9.scope: Consumed 4.015s CPU time, 157.7M memory peak, 0B memory swap peak. Feb 13 19:53:48.064410 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:53:48.065334 systemd-logind[1456]: Removed session 9. Feb 13 19:53:49.393019 kubelet[2525]: I0213 19:53:49.392982 2525 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:53:49.393495 containerd[1472]: time="2025-02-13T19:53:49.393397900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:53:49.393833 kubelet[2525]: I0213 19:53:49.393639 2525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:53:49.654151 systemd[1]: Created slice kubepods-besteffort-podc1ffa2eb_6271_4fef_bf4c_7c262b1ec2c2.slice - libcontainer container kubepods-besteffort-podc1ffa2eb_6271_4fef_bf4c_7c262b1ec2c2.slice. Feb 13 19:53:49.660985 kubelet[2525]: I0213 19:53:49.660945 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2-kube-proxy\") pod \"kube-proxy-h48pq\" (UID: \"c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2\") " pod="kube-system/kube-proxy-h48pq" Feb 13 19:53:49.660985 kubelet[2525]: I0213 19:53:49.660977 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2-xtables-lock\") pod \"kube-proxy-h48pq\" (UID: \"c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2\") " pod="kube-system/kube-proxy-h48pq" Feb 13 19:53:49.661128 kubelet[2525]: I0213 19:53:49.660997 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2-lib-modules\") pod \"kube-proxy-h48pq\" (UID: \"c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2\") " pod="kube-system/kube-proxy-h48pq" Feb 13 19:53:49.661128 kubelet[2525]: I0213 19:53:49.661015 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ws68\" (UniqueName: \"kubernetes.io/projected/c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2-kube-api-access-2ws68\") pod \"kube-proxy-h48pq\" (UID: \"c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2\") " pod="kube-system/kube-proxy-h48pq" Feb 13 19:53:49.765716 kubelet[2525]: E0213 19:53:49.765675 2525 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:53:49.765716 kubelet[2525]: E0213 19:53:49.765706 2525 projected.go:194] Error preparing data for projected volume kube-api-access-2ws68 for pod kube-system/kube-proxy-h48pq: configmap "kube-root-ca.crt" not found Feb 13 19:53:49.765872 kubelet[2525]: E0213 19:53:49.765760 2525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2-kube-api-access-2ws68 podName:c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2 nodeName:}" failed. No retries permitted until 2025-02-13 19:53:50.26573843 +0000 UTC m=+6.713403133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2ws68" (UniqueName: "kubernetes.io/projected/c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2-kube-api-access-2ws68") pod "kube-proxy-h48pq" (UID: "c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2") : configmap "kube-root-ca.crt" not found Feb 13 19:53:50.459703 systemd[1]: Created slice kubepods-besteffort-pod1798726e_66ad_45a7_8b69_eaf1a547f355.slice - libcontainer container kubepods-besteffort-pod1798726e_66ad_45a7_8b69_eaf1a547f355.slice. 
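[Editor's note] The MountVolume.SetUp failure above is not fatal: the nestedpendingoperations entry records a wall-clock time before which no retries are permitted, with durationBeforeRetry 500ms (the kube-root-ca.crt configmap simply does not exist yet this early in bootstrap). A sketch of that retry pattern follows; only the 500ms initial delay comes from this log, the doubling factor and cap are assumptions:

```go
// Retry a failing volume mount with exponential backoff, logging the
// "no retries permitted until" deadline the way kubelet's
// nestedpendingoperations entry does.
package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond          // matches durationBeforeRetry in the log
	const maxBackoff = 2*time.Minute + 2*time.Second // assumed cap, not from this log

	mount := func() error {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}

	for attempt := 1; attempt <= 3; attempt++ {
		if err := mount(); err != nil {
			fmt.Printf("attempt %d failed: %v; no retries permitted until %s (durationBeforeRetry %s)\n",
				attempt, err, time.Now().Add(backoff).Format(time.RFC3339Nano), backoff)
			time.Sleep(backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}
}
```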
Feb 13 19:53:50.464859 kubelet[2525]: I0213 19:53:50.464819 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8drf\" (UniqueName: \"kubernetes.io/projected/1798726e-66ad-45a7-8b69-eaf1a547f355-kube-api-access-p8drf\") pod \"tigera-operator-7d68577dc5-cljbf\" (UID: \"1798726e-66ad-45a7-8b69-eaf1a547f355\") " pod="tigera-operator/tigera-operator-7d68577dc5-cljbf" Feb 13 19:53:50.465186 kubelet[2525]: I0213 19:53:50.464867 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1798726e-66ad-45a7-8b69-eaf1a547f355-var-lib-calico\") pod \"tigera-operator-7d68577dc5-cljbf\" (UID: \"1798726e-66ad-45a7-8b69-eaf1a547f355\") " pod="tigera-operator/tigera-operator-7d68577dc5-cljbf" Feb 13 19:53:50.565988 kubelet[2525]: E0213 19:53:50.565946 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:50.566460 containerd[1472]: time="2025-02-13T19:53:50.566424998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h48pq,Uid:c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:50.596287 containerd[1472]: time="2025-02-13T19:53:50.596200098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:50.596287 containerd[1472]: time="2025-02-13T19:53:50.596249222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:50.596287 containerd[1472]: time="2025-02-13T19:53:50.596265133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:50.596490 containerd[1472]: time="2025-02-13T19:53:50.596342700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:50.614937 systemd[1]: Started cri-containerd-ceea400a3f561fa34d8844ae8320044d66e5e97d0869b6ce796b3ed7ac51105d.scope - libcontainer container ceea400a3f561fa34d8844ae8320044d66e5e97d0869b6ce796b3ed7ac51105d. 
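[Editor's note] The systemd unit names in these entries are derivable from the pod: a best-effort pod's cgroup slice is kubepods-besteffort-pod&lt;UID&gt;.slice with the UID's dashes escaped to underscores, and each running container gets a transient cri-containerd-&lt;container-id&gt;.scope. A sketch of that naming, inferred from the names in this log rather than taken from kubelet/containerd source:

```go
// Derive the systemd slice and scope names seen in this log from the pod
// UID and container ID.
package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName escapes the pod UID's dashes to underscores, as
// systemd slice names require.
func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func containerScopeName(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	fmt.Println(besteffortSliceName("c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2"))
	// kubepods-besteffort-podc1ffa2eb_6271_4fef_bf4c_7c262b1ec2c2.slice
	fmt.Println(containerScopeName("ceea400a3f561fa34d8844ae8320044d66e5e97d0869b6ce796b3ed7ac51105d"))
	// cri-containerd-ceea400a3f561fa34d8844ae8320044d66e5e97d0869b6ce796b3ed7ac51105d.scope
}
```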
Feb 13 19:53:50.633995 containerd[1472]: time="2025-02-13T19:53:50.633952867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h48pq,Uid:c1ffa2eb-6271-4fef-bf4c-7c262b1ec2c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceea400a3f561fa34d8844ae8320044d66e5e97d0869b6ce796b3ed7ac51105d\"" Feb 13 19:53:50.634727 kubelet[2525]: E0213 19:53:50.634697 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:50.638997 containerd[1472]: time="2025-02-13T19:53:50.637201249Z" level=info msg="CreateContainer within sandbox \"ceea400a3f561fa34d8844ae8320044d66e5e97d0869b6ce796b3ed7ac51105d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:53:50.656337 containerd[1472]: time="2025-02-13T19:53:50.656308805Z" level=info msg="CreateContainer within sandbox \"ceea400a3f561fa34d8844ae8320044d66e5e97d0869b6ce796b3ed7ac51105d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ea0f5180cd6d19731005bc980b7a2ed3301d88e8141998d873045e62b5a72df\"" Feb 13 19:53:50.656842 containerd[1472]: time="2025-02-13T19:53:50.656792178Z" level=info msg="StartContainer for \"4ea0f5180cd6d19731005bc980b7a2ed3301d88e8141998d873045e62b5a72df\"" Feb 13 19:53:50.687917 systemd[1]: Started cri-containerd-4ea0f5180cd6d19731005bc980b7a2ed3301d88e8141998d873045e62b5a72df.scope - libcontainer container 4ea0f5180cd6d19731005bc980b7a2ed3301d88e8141998d873045e62b5a72df. Feb 13 19:53:50.715922 containerd[1472]: time="2025-02-13T19:53:50.715810924Z" level=info msg="StartContainer for \"4ea0f5180cd6d19731005bc980b7a2ed3301d88e8141998d873045e62b5a72df\" returns successfully" Feb 13 19:53:50.763314 containerd[1472]: time="2025-02-13T19:53:50.763268911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-cljbf,Uid:1798726e-66ad-45a7-8b69-eaf1a547f355,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:53:50.786913 containerd[1472]: time="2025-02-13T19:53:50.786668721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:50.786913 containerd[1472]: time="2025-02-13T19:53:50.786732623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:50.786913 containerd[1472]: time="2025-02-13T19:53:50.786747792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:50.787903 containerd[1472]: time="2025-02-13T19:53:50.787837933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:50.807947 systemd[1]: Started cri-containerd-d24c1ff4ae25bbfb81e6ec19b5a92101a79dea20531cba335e4e1a3ab0fced14.scope - libcontainer container d24c1ff4ae25bbfb81e6ec19b5a92101a79dea20531cba335e4e1a3ab0fced14. 
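[Editor's note] Taken together, the containerd entries for kube-proxy-h48pq trace the CRI container lifecycle: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer completes startup. A self-contained sketch of that call order; the RuntimeService interface below is illustrative, not the real k8s.io/cri-api definition:

```go
// Model the sandbox -> container -> start sequence visible in the
// containerd log entries above.
package main

import "fmt"

type RuntimeService interface {
	RunPodSandbox(name, namespace string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime stands in for the real CRI runtime in this sketch.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name, namespace string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sandboxID, containerName string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(containerID string) error { return nil }

func main() {
	var rs RuntimeService = &fakeRuntime{}

	sandboxID, _ := rs.RunPodSandbox("kube-proxy-h48pq", "kube-system")
	containerID, _ := rs.CreateContainer(sandboxID, "kube-proxy")
	if err := rs.StartContainer(containerID); err == nil {
		fmt.Printf("StartContainer for %q returns successfully\n", containerID)
	}
}
```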
Feb 13 19:53:50.852224 containerd[1472]: time="2025-02-13T19:53:50.849930899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-cljbf,Uid:1798726e-66ad-45a7-8b69-eaf1a547f355,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d24c1ff4ae25bbfb81e6ec19b5a92101a79dea20531cba335e4e1a3ab0fced14\"" Feb 13 19:53:50.854025 containerd[1472]: time="2025-02-13T19:53:50.853980138Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:53:51.657443 kubelet[2525]: E0213 19:53:51.657411 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:51.665371 kubelet[2525]: I0213 19:53:51.665302 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h48pq" podStartSLOduration=2.665284044 podStartE2EDuration="2.665284044s" podCreationTimestamp="2025-02-13 19:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:51.665170308 +0000 UTC m=+8.112835011" watchObservedRunningTime="2025-02-13 19:53:51.665284044 +0000 UTC m=+8.112948747" Feb 13 19:53:51.884874 kubelet[2525]: E0213 19:53:51.884839 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:52.659654 kubelet[2525]: E0213 19:53:52.659616 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:52.659654 kubelet[2525]: E0213 19:53:52.659625 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:52.714538 kubelet[2525]: E0213 19:53:52.714515 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:52.946500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406731495.mount: Deactivated successfully. 
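[Editor's note] The pod_startup_latency_tracker entries can be reproduced from their own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling - firstStartedPulling), which is zero for kube-proxy above because its image was never pulled (both pull stamps are the zero time). The tigera-operator entry further below shows a nonzero pull window: 7.724897135s - 3.525169698s = 4.199727437s. A sketch of that arithmetic, not the tracker's actual code:

```go
// Recompute kube-proxy's logged startup durations from the timestamps in
// the pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 19:53:49 +0000 UTC")
	observed := mustParse("2025-02-13 19:53:51.665284044 +0000 UTC")

	// No image pull happened: both pull timestamps are the zero value.
	var firstPull, lastPull time.Time

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
	// podStartE2EDuration=2.665284044s podStartSLOduration=2.665284044s
}
```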
Feb 13 19:53:53.661980 kubelet[2525]: E0213 19:53:53.660885 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:53.661980 kubelet[2525]: E0213 19:53:53.660975 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:54.156673 kubelet[2525]: E0213 19:53:54.156649 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:54.165671 containerd[1472]: time="2025-02-13T19:53:54.165620730Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:54.247928 containerd[1472]: time="2025-02-13T19:53:54.247866748Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:53:54.324829 containerd[1472]: time="2025-02-13T19:53:54.324745982Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:54.377021 containerd[1472]: time="2025-02-13T19:53:54.376982721Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:54.377788 containerd[1472]: time="2025-02-13T19:53:54.377729351Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.523698035s" Feb 13 19:53:54.377828 containerd[1472]: time="2025-02-13T19:53:54.377792991Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:53:54.379299 containerd[1472]: time="2025-02-13T19:53:54.379271243Z" level=info msg="CreateContainer within sandbox \"d24c1ff4ae25bbfb81e6ec19b5a92101a79dea20531cba335e4e1a3ab0fced14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:53:54.662312 kubelet[2525]: E0213 19:53:54.662277 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:54.799391 containerd[1472]: time="2025-02-13T19:53:54.799350506Z" level=info msg="CreateContainer within sandbox \"d24c1ff4ae25bbfb81e6ec19b5a92101a79dea20531cba335e4e1a3ab0fced14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8ff75d71fe34d9af4673558425b11bfc7127981e903e327292a12b1bb4431781\"" Feb 13 19:53:54.799840 containerd[1472]: time="2025-02-13T19:53:54.799804428Z" level=info msg="StartContainer for \"8ff75d71fe34d9af4673558425b11bfc7127981e903e327292a12b1bb4431781\"" Feb 13 19:53:54.835094 systemd[1]: Started cri-containerd-8ff75d71fe34d9af4673558425b11bfc7127981e903e327292a12b1bb4431781.scope - libcontainer container 
8ff75d71fe34d9af4673558425b11bfc7127981e903e327292a12b1bb4431781. Feb 13 19:53:55.371705 containerd[1472]: time="2025-02-13T19:53:55.371650358Z" level=info msg="StartContainer for \"8ff75d71fe34d9af4673558425b11bfc7127981e903e327292a12b1bb4431781\" returns successfully" Feb 13 19:53:56.637503 update_engine[1461]: I20250213 19:53:56.637450 1461 update_attempter.cc:509] Updating boot flags... Feb 13 19:53:56.660815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2917) Feb 13 19:53:56.704335 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2916) Feb 13 19:53:56.728254 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2916) Feb 13 19:53:57.725734 kubelet[2525]: I0213 19:53:57.724914 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-cljbf" podStartSLOduration=4.199727437 podStartE2EDuration="7.724897135s" podCreationTimestamp="2025-02-13 19:53:50 +0000 UTC" firstStartedPulling="2025-02-13 19:53:50.853163439 +0000 UTC m=+7.300828143" lastFinishedPulling="2025-02-13 19:53:54.378333138 +0000 UTC m=+10.825997841" observedRunningTime="2025-02-13 19:53:55.675104805 +0000 UTC m=+12.122769508" watchObservedRunningTime="2025-02-13 19:53:57.724897135 +0000 UTC m=+14.172561838" Feb 13 19:53:57.737768 systemd[1]: Created slice kubepods-besteffort-pod77dd0bff_945c_4735_8b6e_68caa4f1eea0.slice - libcontainer container kubepods-besteffort-pod77dd0bff_945c_4735_8b6e_68caa4f1eea0.slice. Feb 13 19:53:57.770216 systemd[1]: Created slice kubepods-besteffort-pod6ea86ae5_8acb_4973_9787_565b31d02907.slice - libcontainer container kubepods-besteffort-pod6ea86ae5_8acb_4973_9787_565b31d02907.slice. 
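[Editor's note] Just below, once the calico-typha and calico-node volumes start being wired up, kubelet's volume-plugin prober begins failing repeatedly: the FlexVolume driver binary nodeagent~uds/uds does not exist under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, so each probe returns no output, and unmarshaling empty output fails with "unexpected end of JSON input". A minimal reproduction of that failure mode; this is not kubelet's code, and the driverStatus shape is illustrative:

```go
// Reproduce the FlexVolume probe failure: exec a missing driver binary,
// get empty output, and fail to unmarshal it as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the general shape of a FlexVolume driver reply.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The binary is absent on this node, so the call fails and out stays
	// empty (the exact error text depends on how the binary is looked up).
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init").Output()
	if err != nil {
		fmt.Printf("FlexVolume: driver call failed: error: %v, output: %q\n", err, out)
	}

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this prints: unexpected end of JSON input
		fmt.Printf("Failed to unmarshal output for command: init, error: %v\n", err)
	}
}
```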
Feb 13 19:53:57.815361 kubelet[2525]: I0213 19:53:57.815317 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77dd0bff-945c-4735-8b6e-68caa4f1eea0-tigera-ca-bundle\") pod \"calico-typha-786ddd4f67-972ds\" (UID: \"77dd0bff-945c-4735-8b6e-68caa4f1eea0\") " pod="calico-system/calico-typha-786ddd4f67-972ds" Feb 13 19:53:57.815361 kubelet[2525]: I0213 19:53:57.815360 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmfb\" (UniqueName: \"kubernetes.io/projected/77dd0bff-945c-4735-8b6e-68caa4f1eea0-kube-api-access-jtmfb\") pod \"calico-typha-786ddd4f67-972ds\" (UID: \"77dd0bff-945c-4735-8b6e-68caa4f1eea0\") " pod="calico-system/calico-typha-786ddd4f67-972ds" Feb 13 19:53:57.815542 kubelet[2525]: I0213 19:53:57.815382 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-flexvol-driver-host\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815542 kubelet[2525]: I0213 19:53:57.815400 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-cni-net-dir\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815542 kubelet[2525]: I0213 19:53:57.815419 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrznc\" (UniqueName: \"kubernetes.io/projected/6ea86ae5-8acb-4973-9787-565b31d02907-kube-api-access-xrznc\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815542 kubelet[2525]: I0213 19:53:57.815437 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-cni-bin-dir\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815542 kubelet[2525]: I0213 19:53:57.815478 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/77dd0bff-945c-4735-8b6e-68caa4f1eea0-typha-certs\") pod \"calico-typha-786ddd4f67-972ds\" (UID: \"77dd0bff-945c-4735-8b6e-68caa4f1eea0\") " pod="calico-system/calico-typha-786ddd4f67-972ds" Feb 13 19:53:57.815660 kubelet[2525]: I0213 19:53:57.815507 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-lib-modules\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815660 kubelet[2525]: I0213 19:53:57.815523 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ea86ae5-8acb-4973-9787-565b31d02907-tigera-ca-bundle\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " 
pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815660 kubelet[2525]: I0213 19:53:57.815601 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-var-lib-calico\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815727 kubelet[2525]: I0213 19:53:57.815669 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-xtables-lock\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815760 kubelet[2525]: I0213 19:53:57.815729 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-policysync\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.815760 kubelet[2525]: I0213 19:53:57.815743 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ea86ae5-8acb-4973-9787-565b31d02907-node-certs\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.816226 kubelet[2525]: I0213 19:53:57.816203 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-var-run-calico\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.816261 kubelet[2525]: I0213 19:53:57.816239 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ea86ae5-8acb-4973-9787-565b31d02907-cni-log-dir\") pod \"calico-node-cqhnq\" (UID: \"6ea86ae5-8acb-4973-9787-565b31d02907\") " pod="calico-system/calico-node-cqhnq" Feb 13 19:53:57.918699 kubelet[2525]: E0213 19:53:57.918658 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:57.918699 kubelet[2525]: W0213 19:53:57.918685 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:57.918699 kubelet[2525]: E0213 19:53:57.918705 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:57.921026 kubelet[2525]: E0213 19:53:57.920995 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:57.921026 kubelet[2525]: W0213 19:53:57.921014 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:57.921026 kubelet[2525]: E0213 19:53:57.921038 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:57.921254 kubelet[2525]: E0213 19:53:57.921231 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:57.921254 kubelet[2525]: W0213 19:53:57.921239 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:57.921254 kubelet[2525]: E0213 19:53:57.921247 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.108026 kubelet[2525]: E0213 19:53:58.106718 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.108026 kubelet[2525]: W0213 19:53:58.106763 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.108026 kubelet[2525]: E0213 19:53:58.106795 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.112207 kubelet[2525]: E0213 19:53:58.112187 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.112287 kubelet[2525]: W0213 19:53:58.112271 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.112373 kubelet[2525]: E0213 19:53:58.112341 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.127157 kubelet[2525]: E0213 19:53:58.126366 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:53:58.128547 kubelet[2525]: I0213 19:53:58.128521 2525 status_manager.go:890] "Failed to get status for pod" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" pod="calico-system/csi-node-driver-rtnwd" err="pods \"csi-node-driver-rtnwd\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Feb 13 19:53:58.209895 kubelet[2525]: E0213 19:53:58.209857 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.209895 kubelet[2525]: W0213 19:53:58.209877 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.209895 kubelet[2525]: E0213 19:53:58.209895 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.210155 kubelet[2525]: E0213 19:53:58.210132 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.210155 kubelet[2525]: W0213 19:53:58.210144 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.210155 kubelet[2525]: E0213 19:53:58.210152 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.210336 kubelet[2525]: E0213 19:53:58.210314 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.210336 kubelet[2525]: W0213 19:53:58.210325 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.210381 kubelet[2525]: E0213 19:53:58.210332 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.210619 kubelet[2525]: E0213 19:53:58.210589 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.210619 kubelet[2525]: W0213 19:53:58.210610 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.210671 kubelet[2525]: E0213 19:53:58.210631 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.210969 kubelet[2525]: E0213 19:53:58.210953 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.210969 kubelet[2525]: W0213 19:53:58.210963 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.211030 kubelet[2525]: E0213 19:53:58.210972 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.211185 kubelet[2525]: E0213 19:53:58.211170 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.211185 kubelet[2525]: W0213 19:53:58.211181 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.211232 kubelet[2525]: E0213 19:53:58.211189 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.211384 kubelet[2525]: E0213 19:53:58.211369 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.211384 kubelet[2525]: W0213 19:53:58.211380 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.211430 kubelet[2525]: E0213 19:53:58.211389 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.211591 kubelet[2525]: E0213 19:53:58.211578 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.211591 kubelet[2525]: W0213 19:53:58.211587 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.211636 kubelet[2525]: E0213 19:53:58.211595 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.211824 kubelet[2525]: E0213 19:53:58.211810 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.211824 kubelet[2525]: W0213 19:53:58.211820 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.211871 kubelet[2525]: E0213 19:53:58.211828 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.212011 kubelet[2525]: E0213 19:53:58.211998 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.212011 kubelet[2525]: W0213 19:53:58.212007 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.212051 kubelet[2525]: E0213 19:53:58.212014 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.212226 kubelet[2525]: E0213 19:53:58.212211 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.212226 kubelet[2525]: W0213 19:53:58.212223 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.212293 kubelet[2525]: E0213 19:53:58.212234 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.212433 kubelet[2525]: E0213 19:53:58.212419 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.212433 kubelet[2525]: W0213 19:53:58.212429 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.212476 kubelet[2525]: E0213 19:53:58.212437 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.212619 kubelet[2525]: E0213 19:53:58.212606 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.212619 kubelet[2525]: W0213 19:53:58.212616 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.212667 kubelet[2525]: E0213 19:53:58.212623 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.212829 kubelet[2525]: E0213 19:53:58.212815 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.212829 kubelet[2525]: W0213 19:53:58.212826 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.212893 kubelet[2525]: E0213 19:53:58.212834 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.212999 kubelet[2525]: E0213 19:53:58.212986 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.212999 kubelet[2525]: W0213 19:53:58.212996 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.213042 kubelet[2525]: E0213 19:53:58.213003 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.213163 kubelet[2525]: E0213 19:53:58.213149 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.213163 kubelet[2525]: W0213 19:53:58.213162 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.213203 kubelet[2525]: E0213 19:53:58.213169 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.213337 kubelet[2525]: E0213 19:53:58.213323 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.213337 kubelet[2525]: W0213 19:53:58.213333 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.213380 kubelet[2525]: E0213 19:53:58.213340 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.213499 kubelet[2525]: E0213 19:53:58.213486 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.213499 kubelet[2525]: W0213 19:53:58.213496 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.213539 kubelet[2525]: E0213 19:53:58.213504 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.213667 kubelet[2525]: E0213 19:53:58.213654 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.213667 kubelet[2525]: W0213 19:53:58.213664 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.213711 kubelet[2525]: E0213 19:53:58.213671 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.213856 kubelet[2525]: E0213 19:53:58.213842 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.213856 kubelet[2525]: W0213 19:53:58.213852 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.213913 kubelet[2525]: E0213 19:53:58.213860 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.219229 kubelet[2525]: E0213 19:53:58.219202 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.219229 kubelet[2525]: W0213 19:53:58.219215 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.219229 kubelet[2525]: E0213 19:53:58.219225 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.219368 kubelet[2525]: I0213 19:53:58.219254 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6-socket-dir\") pod \"csi-node-driver-rtnwd\" (UID: \"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6\") " pod="calico-system/csi-node-driver-rtnwd" Feb 13 19:53:58.219483 kubelet[2525]: E0213 19:53:58.219464 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.219483 kubelet[2525]: W0213 19:53:58.219476 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.219540 kubelet[2525]: E0213 19:53:58.219491 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.219540 kubelet[2525]: I0213 19:53:58.219505 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6-registration-dir\") pod \"csi-node-driver-rtnwd\" (UID: \"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6\") " pod="calico-system/csi-node-driver-rtnwd" Feb 13 19:53:58.219770 kubelet[2525]: E0213 19:53:58.219741 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.219770 kubelet[2525]: W0213 19:53:58.219763 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.219840 kubelet[2525]: E0213 19:53:58.219797 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.220049 kubelet[2525]: E0213 19:53:58.220033 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.220049 kubelet[2525]: W0213 19:53:58.220043 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.220116 kubelet[2525]: E0213 19:53:58.220060 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.220311 kubelet[2525]: E0213 19:53:58.220284 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.220311 kubelet[2525]: W0213 19:53:58.220300 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.220311 kubelet[2525]: E0213 19:53:58.220319 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.220476 kubelet[2525]: I0213 19:53:58.220344 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6-kubelet-dir\") pod \"csi-node-driver-rtnwd\" (UID: \"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6\") " pod="calico-system/csi-node-driver-rtnwd" Feb 13 19:53:58.220707 kubelet[2525]: E0213 19:53:58.220580 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.220707 kubelet[2525]: W0213 19:53:58.220593 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.220707 kubelet[2525]: E0213 19:53:58.220608 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.220707 kubelet[2525]: I0213 19:53:58.220623 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzsc2\" (UniqueName: \"kubernetes.io/projected/a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6-kube-api-access-wzsc2\") pod \"csi-node-driver-rtnwd\" (UID: \"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6\") " pod="calico-system/csi-node-driver-rtnwd" Feb 13 19:53:58.221026 kubelet[2525]: E0213 19:53:58.220964 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.221026 kubelet[2525]: W0213 19:53:58.220993 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.221088 kubelet[2525]: E0213 19:53:58.221031 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.221113 kubelet[2525]: I0213 19:53:58.221093 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6-varrun\") pod \"csi-node-driver-rtnwd\" (UID: \"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6\") " pod="calico-system/csi-node-driver-rtnwd" Feb 13 19:53:58.221309 kubelet[2525]: E0213 19:53:58.221292 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.221309 kubelet[2525]: W0213 19:53:58.221305 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.221378 kubelet[2525]: E0213 19:53:58.221356 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.221494 kubelet[2525]: E0213 19:53:58.221480 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.221494 kubelet[2525]: W0213 19:53:58.221491 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.221564 kubelet[2525]: E0213 19:53:58.221517 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.221737 kubelet[2525]: E0213 19:53:58.221724 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.221737 kubelet[2525]: W0213 19:53:58.221735 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.221737 kubelet[2525]: E0213 19:53:58.221756 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.222067 kubelet[2525]: E0213 19:53:58.222045 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.222067 kubelet[2525]: W0213 19:53:58.222067 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.222158 kubelet[2525]: E0213 19:53:58.222083 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.222539 kubelet[2525]: E0213 19:53:58.222302 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.222539 kubelet[2525]: W0213 19:53:58.222318 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.222539 kubelet[2525]: E0213 19:53:58.222327 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.222622 kubelet[2525]: E0213 19:53:58.222556 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.222622 kubelet[2525]: W0213 19:53:58.222567 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.222622 kubelet[2525]: E0213 19:53:58.222577 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.222834 kubelet[2525]: E0213 19:53:58.222819 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.222870 kubelet[2525]: W0213 19:53:58.222831 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.222870 kubelet[2525]: E0213 19:53:58.222849 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.223065 kubelet[2525]: E0213 19:53:58.223050 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.223065 kubelet[2525]: W0213 19:53:58.223061 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.223155 kubelet[2525]: E0213 19:53:58.223069 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.322327 kubelet[2525]: E0213 19:53:58.322264 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.322327 kubelet[2525]: W0213 19:53:58.322292 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.322327 kubelet[2525]: E0213 19:53:58.322317 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.322809 kubelet[2525]: E0213 19:53:58.322757 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.322809 kubelet[2525]: W0213 19:53:58.322798 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.322963 kubelet[2525]: E0213 19:53:58.322823 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.323046 kubelet[2525]: E0213 19:53:58.323031 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.323046 kubelet[2525]: W0213 19:53:58.323042 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.323091 kubelet[2525]: E0213 19:53:58.323058 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.323316 kubelet[2525]: E0213 19:53:58.323289 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.323316 kubelet[2525]: W0213 19:53:58.323304 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.323480 kubelet[2525]: E0213 19:53:58.323376 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.323675 kubelet[2525]: E0213 19:53:58.323651 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.323720 kubelet[2525]: W0213 19:53:58.323674 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.323720 kubelet[2525]: E0213 19:53:58.323705 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.324007 kubelet[2525]: E0213 19:53:58.323991 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.324007 kubelet[2525]: W0213 19:53:58.324003 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.324086 kubelet[2525]: E0213 19:53:58.324019 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.324279 kubelet[2525]: E0213 19:53:58.324254 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.324279 kubelet[2525]: W0213 19:53:58.324267 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.324329 kubelet[2525]: E0213 19:53:58.324282 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.324499 kubelet[2525]: E0213 19:53:58.324485 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.324499 kubelet[2525]: W0213 19:53:58.324497 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.324571 kubelet[2525]: E0213 19:53:58.324547 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.324730 kubelet[2525]: E0213 19:53:58.324715 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.324730 kubelet[2525]: W0213 19:53:58.324727 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.324802 kubelet[2525]: E0213 19:53:58.324787 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.324973 kubelet[2525]: E0213 19:53:58.324958 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.324973 kubelet[2525]: W0213 19:53:58.324969 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.325033 kubelet[2525]: E0213 19:53:58.324994 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.325199 kubelet[2525]: E0213 19:53:58.325182 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.325199 kubelet[2525]: W0213 19:53:58.325193 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.325254 kubelet[2525]: E0213 19:53:58.325208 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.325441 kubelet[2525]: E0213 19:53:58.325424 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.325441 kubelet[2525]: W0213 19:53:58.325438 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.325542 kubelet[2525]: E0213 19:53:58.325453 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.325902 kubelet[2525]: E0213 19:53:58.325728 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.325902 kubelet[2525]: W0213 19:53:58.325759 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.325902 kubelet[2525]: E0213 19:53:58.325797 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.326041 kubelet[2525]: E0213 19:53:58.326024 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.326041 kubelet[2525]: W0213 19:53:58.326036 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.326137 kubelet[2525]: E0213 19:53:58.326052 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.326356 kubelet[2525]: E0213 19:53:58.326306 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.326356 kubelet[2525]: W0213 19:53:58.326317 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.326409 kubelet[2525]: E0213 19:53:58.326346 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.326671 kubelet[2525]: E0213 19:53:58.326583 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.326755 kubelet[2525]: W0213 19:53:58.326725 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.326827 kubelet[2525]: E0213 19:53:58.326769 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.327458 kubelet[2525]: E0213 19:53:58.327061 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.327458 kubelet[2525]: W0213 19:53:58.327079 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.327458 kubelet[2525]: E0213 19:53:58.327130 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.327458 kubelet[2525]: E0213 19:53:58.327337 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.327458 kubelet[2525]: W0213 19:53:58.327357 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.327458 kubelet[2525]: E0213 19:53:58.327372 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.327619 kubelet[2525]: E0213 19:53:58.327590 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.327619 kubelet[2525]: W0213 19:53:58.327598 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.327619 kubelet[2525]: E0213 19:53:58.327606 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.327965 kubelet[2525]: E0213 19:53:58.327940 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.327965 kubelet[2525]: W0213 19:53:58.327956 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.328145 kubelet[2525]: E0213 19:53:58.327973 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.328229 kubelet[2525]: E0213 19:53:58.328205 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.328229 kubelet[2525]: W0213 19:53:58.328217 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.328373 kubelet[2525]: E0213 19:53:58.328243 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.328509 kubelet[2525]: E0213 19:53:58.328484 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.328509 kubelet[2525]: W0213 19:53:58.328496 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.328663 kubelet[2525]: E0213 19:53:58.328602 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.328761 kubelet[2525]: E0213 19:53:58.328738 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.328761 kubelet[2525]: W0213 19:53:58.328757 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.329221 kubelet[2525]: E0213 19:53:58.329193 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.329221 kubelet[2525]: W0213 19:53:58.329220 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.329289 kubelet[2525]: E0213 19:53:58.329233 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.329312 kubelet[2525]: E0213 19:53:58.328796 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.329678 kubelet[2525]: E0213 19:53:58.329599 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.329678 kubelet[2525]: W0213 19:53:58.329623 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.329678 kubelet[2525]: E0213 19:53:58.329636 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:58.332526 kubelet[2525]: E0213 19:53:58.332503 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:58.332526 kubelet[2525]: W0213 19:53:58.332518 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:58.332994 kubelet[2525]: E0213 19:53:58.332529 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:58.342682 kubelet[2525]: E0213 19:53:58.342636 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:58.343371 containerd[1472]: time="2025-02-13T19:53:58.343338935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-786ddd4f67-972ds,Uid:77dd0bff-945c-4735-8b6e-68caa4f1eea0,Namespace:calico-system,Attempt:0,}" Feb 13 19:53:58.372690 kubelet[2525]: E0213 19:53:58.372603 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:58.373166 containerd[1472]: time="2025-02-13T19:53:58.373121465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqhnq,Uid:6ea86ae5-8acb-4973-9787-565b31d02907,Namespace:calico-system,Attempt:0,}" Feb 13 19:53:58.398953 containerd[1472]: time="2025-02-13T19:53:58.398872274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:58.398953 containerd[1472]: time="2025-02-13T19:53:58.398918913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:58.398953 containerd[1472]: time="2025-02-13T19:53:58.398930464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:58.399128 containerd[1472]: time="2025-02-13T19:53:58.399009264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:58.404087 containerd[1472]: time="2025-02-13T19:53:58.404013439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:58.404143 containerd[1472]: time="2025-02-13T19:53:58.404064686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:58.404143 containerd[1472]: time="2025-02-13T19:53:58.404098770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:58.404233 containerd[1472]: time="2025-02-13T19:53:58.404202918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:58.419908 systemd[1]: Started cri-containerd-bcd53ae6b5f038c641c9391ea1674ba2941454689ac555bab25f598a0f762ba5.scope - libcontainer container bcd53ae6b5f038c641c9391ea1674ba2941454689ac555bab25f598a0f762ba5. Feb 13 19:53:58.423648 systemd[1]: Started cri-containerd-48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9.scope - libcontainer container 48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9. 
Feb 13 19:53:58.450143 containerd[1472]: time="2025-02-13T19:53:58.449947584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqhnq,Uid:6ea86ae5-8acb-4973-9787-565b31d02907,Namespace:calico-system,Attempt:0,} returns sandbox id \"48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9\"" Feb 13 19:53:58.452350 kubelet[2525]: E0213 19:53:58.452321 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:58.453044 containerd[1472]: time="2025-02-13T19:53:58.452996664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:53:58.461288 containerd[1472]: time="2025-02-13T19:53:58.461246480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-786ddd4f67-972ds,Uid:77dd0bff-945c-4735-8b6e-68caa4f1eea0,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcd53ae6b5f038c641c9391ea1674ba2941454689ac555bab25f598a0f762ba5\"" Feb 13 19:53:58.462019 kubelet[2525]: E0213 19:53:58.461979 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:59.638055 kubelet[2525]: E0213 19:53:59.638007 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:00.289108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555970350.mount: Deactivated successfully. 
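The two "returns sandbox id" messages above complete the RunPodSandbox calls issued at 19:53:58.34 and 19:53:58.37; kubelet then moves on to PullImage and, later, CreateContainer/StartContainer inside each returned sandbox. A sketch of that first step against the CRI v1 API; the socket path and error handling are assumptions, while the metadata values are copied from the log:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// kubelet reaches containerd over its CRI socket (path assumed here).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := pb.NewRuntimeServiceClient(conn)

	// Step 1: RunPodSandbox creates the pause container and the pod's
	// network namespace; the metadata matches the log line above.
	resp, err := rt.RunPodSandbox(context.Background(), &pb.RunPodSandboxRequest{
		Config: &pb.PodSandboxConfig{
			Metadata: &pb.PodSandboxMetadata{
				Name:      "calico-node-cqhnq",
				Uid:       "6ea86ae5-8acb-4973-9787-565b31d02907",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Steps 2-4 (PullImage, CreateContainer, StartContainer) target this id,
	// which is the order the following log lines show.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}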
Feb 13 19:54:00.417706 containerd[1472]: time="2025-02-13T19:54:00.417638393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:00.418619 containerd[1472]: time="2025-02-13T19:54:00.418575577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:54:00.419900 containerd[1472]: time="2025-02-13T19:54:00.419855841Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:00.422433 containerd[1472]: time="2025-02-13T19:54:00.422374999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:00.423259 containerd[1472]: time="2025-02-13T19:54:00.423224848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.970179323s" Feb 13 19:54:00.423313 containerd[1472]: time="2025-02-13T19:54:00.423261948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:54:00.424010 containerd[1472]: time="2025-02-13T19:54:00.423952686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:54:00.425160 containerd[1472]: time="2025-02-13T19:54:00.425086552Z" level=info msg="CreateContainer within sandbox \"48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:54:00.439602 containerd[1472]: time="2025-02-13T19:54:00.439554300Z" level=info msg="CreateContainer within sandbox \"48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc\"" Feb 13 19:54:00.440049 containerd[1472]: time="2025-02-13T19:54:00.440022275Z" level=info msg="StartContainer for \"345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc\"" Feb 13 19:54:00.475015 systemd[1]: Started cri-containerd-345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc.scope - libcontainer container 345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc. Feb 13 19:54:00.505339 containerd[1472]: time="2025-02-13T19:54:00.505300547Z" level=info msg="StartContainer for \"345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc\" returns successfully" Feb 13 19:54:00.517457 systemd[1]: cri-containerd-345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc.scope: Deactivated successfully. 
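The image pulled here, pod2daemon-flexvol, runs as the flexvol-driver init container created above; it installs the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, which resolves the missing-executable condition behind the FlexVolume probe failures at 19:53:58. The scope deactivation at 19:54:00.517 is that short-lived container exiting after a successful run. As a quick sanity check on the reported transfer, the two logged figures imply a modest pull rate:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both figures are copied from the containerd lines above.
	const bytesRead = 6855343
	pull, _ := time.ParseDuration("1.970179323s")
	fmt.Printf("effective pull rate ≈ %.2f MiB/s\n",
		float64(bytesRead)/pull.Seconds()/(1<<20)) // ≈ 3.32 MiB/s
}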
Feb 13 19:54:00.577741 containerd[1472]: time="2025-02-13T19:54:00.577569742Z" level=info msg="shim disconnected" id=345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc namespace=k8s.io Feb 13 19:54:00.577741 containerd[1472]: time="2025-02-13T19:54:00.577624145Z" level=warning msg="cleaning up after shim disconnected" id=345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc namespace=k8s.io Feb 13 19:54:00.577741 containerd[1472]: time="2025-02-13T19:54:00.577632060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:54:00.679870 kubelet[2525]: E0213 19:54:00.679689 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:01.268205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-345989c011eef20f281664aa6bfb9c06d726db96aca5999e168adb2cb6ab14bc-rootfs.mount: Deactivated successfully. Feb 13 19:54:01.635631 kubelet[2525]: E0213 19:54:01.635553 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:02.514284 containerd[1472]: time="2025-02-13T19:54:02.514228628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:02.515159 containerd[1472]: time="2025-02-13T19:54:02.515116105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 19:54:02.516298 containerd[1472]: time="2025-02-13T19:54:02.516270478Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:02.518297 containerd[1472]: time="2025-02-13T19:54:02.518258878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:02.518833 containerd[1472]: time="2025-02-13T19:54:02.518801945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.09481872s" Feb 13 19:54:02.518864 containerd[1472]: time="2025-02-13T19:54:02.518831681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:54:02.519792 containerd[1472]: time="2025-02-13T19:54:02.519736382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:54:02.527757 containerd[1472]: time="2025-02-13T19:54:02.527325742Z" level=info msg="CreateContainer within sandbox \"bcd53ae6b5f038c641c9391ea1674ba2941454689ac555bab25f598a0f762ba5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:54:02.540930 containerd[1472]: time="2025-02-13T19:54:02.540877416Z" level=info msg="CreateContainer within sandbox 
\"bcd53ae6b5f038c641c9391ea1674ba2941454689ac555bab25f598a0f762ba5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d6c1e3de6f3ec4b899efb3d777010cc2efdb65d1915d0d7e8355c4518e412209\"" Feb 13 19:54:02.541415 containerd[1472]: time="2025-02-13T19:54:02.541378975Z" level=info msg="StartContainer for \"d6c1e3de6f3ec4b899efb3d777010cc2efdb65d1915d0d7e8355c4518e412209\"" Feb 13 19:54:02.578904 systemd[1]: Started cri-containerd-d6c1e3de6f3ec4b899efb3d777010cc2efdb65d1915d0d7e8355c4518e412209.scope - libcontainer container d6c1e3de6f3ec4b899efb3d777010cc2efdb65d1915d0d7e8355c4518e412209. Feb 13 19:54:02.622513 containerd[1472]: time="2025-02-13T19:54:02.622458686Z" level=info msg="StartContainer for \"d6c1e3de6f3ec4b899efb3d777010cc2efdb65d1915d0d7e8355c4518e412209\" returns successfully" Feb 13 19:54:02.683584 kubelet[2525]: E0213 19:54:02.683510 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:02.693434 kubelet[2525]: I0213 19:54:02.693372 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-786ddd4f67-972ds" podStartSLOduration=1.636360513 podStartE2EDuration="5.693347406s" podCreationTimestamp="2025-02-13 19:53:57 +0000 UTC" firstStartedPulling="2025-02-13 19:53:58.462599986 +0000 UTC m=+14.910264689" lastFinishedPulling="2025-02-13 19:54:02.519586879 +0000 UTC m=+18.967251582" observedRunningTime="2025-02-13 19:54:02.692987976 +0000 UTC m=+19.140652679" watchObservedRunningTime="2025-02-13 19:54:02.693347406 +0000 UTC m=+19.141012109" Feb 13 19:54:03.636258 kubelet[2525]: E0213 19:54:03.636198 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:03.685251 kubelet[2525]: E0213 19:54:03.685218 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:04.686627 kubelet[2525]: E0213 19:54:04.686594 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:05.635372 kubelet[2525]: E0213 19:54:05.635292 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:07.636229 kubelet[2525]: E0213 19:54:07.636182 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:08.860376 containerd[1472]: time="2025-02-13T19:54:08.860314964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:08.861124 
containerd[1472]: time="2025-02-13T19:54:08.861072222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:54:08.862127 containerd[1472]: time="2025-02-13T19:54:08.862094390Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:08.864608 containerd[1472]: time="2025-02-13T19:54:08.864564960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:08.865413 containerd[1472]: time="2025-02-13T19:54:08.865369708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.345599532s" Feb 13 19:54:08.865413 containerd[1472]: time="2025-02-13T19:54:08.865406346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:54:08.867471 containerd[1472]: time="2025-02-13T19:54:08.867432698Z" level=info msg="CreateContainer within sandbox \"48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:54:08.881746 containerd[1472]: time="2025-02-13T19:54:08.881716295Z" level=info msg="CreateContainer within sandbox \"48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0\"" Feb 13 19:54:08.882174 containerd[1472]: time="2025-02-13T19:54:08.882136587Z" level=info msg="StartContainer for \"0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0\"" Feb 13 19:54:08.919902 systemd[1]: Started cri-containerd-0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0.scope - libcontainer container 0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0. Feb 13 19:54:08.947509 containerd[1472]: time="2025-02-13T19:54:08.947411659Z" level=info msg="StartContainer for \"0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0\" returns successfully" Feb 13 19:54:09.635679 kubelet[2525]: E0213 19:54:09.635618 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:09.694725 kubelet[2525]: E0213 19:54:09.694700 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:09.963948 systemd[1]: cri-containerd-0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0.scope: Deactivated successfully. 
Feb 13 19:54:09.977509 kubelet[2525]: I0213 19:54:09.977480 2525 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:54:09.988169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0-rootfs.mount: Deactivated successfully. Feb 13 19:54:10.567701 systemd[1]: Created slice kubepods-burstable-podce6cf94a_45eb_47eb_acba_4bf09b224c4f.slice - libcontainer container kubepods-burstable-podce6cf94a_45eb_47eb_acba_4bf09b224c4f.slice. Feb 13 19:54:10.571837 systemd[1]: Created slice kubepods-besteffort-pod67b6a979_5b1c_436f_82c6_7e0dec8e8fa4.slice - libcontainer container kubepods-besteffort-pod67b6a979_5b1c_436f_82c6_7e0dec8e8fa4.slice. Feb 13 19:54:10.577123 systemd[1]: Created slice kubepods-besteffort-pod26630c90_52b3_480e_9c9a_098510701036.slice - libcontainer container kubepods-besteffort-pod26630c90_52b3_480e_9c9a_098510701036.slice. Feb 13 19:54:10.581081 systemd[1]: Created slice kubepods-burstable-podeef88bc7_df5c_4812_b65a_e088e32440c4.slice - libcontainer container kubepods-burstable-podeef88bc7_df5c_4812_b65a_e088e32440c4.slice. Feb 13 19:54:10.585359 systemd[1]: Created slice kubepods-besteffort-pod0df95c66_53bc_436d_8654_d036e666d8e1.slice - libcontainer container kubepods-besteffort-pod0df95c66_53bc_436d_8654_d036e666d8e1.slice. Feb 13 19:54:10.625992 kubelet[2525]: I0213 19:54:10.625949 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26630c90-52b3-480e-9c9a-098510701036-calico-apiserver-certs\") pod \"calico-apiserver-6ffbd469f7-5n427\" (UID: \"26630c90-52b3-480e-9c9a-098510701036\") " pod="calico-apiserver/calico-apiserver-6ffbd469f7-5n427" Feb 13 19:54:10.625992 kubelet[2525]: I0213 19:54:10.625980 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0df95c66-53bc-436d-8654-d036e666d8e1-calico-apiserver-certs\") pod \"calico-apiserver-6ffbd469f7-rkspr\" (UID: \"0df95c66-53bc-436d-8654-d036e666d8e1\") " pod="calico-apiserver/calico-apiserver-6ffbd469f7-rkspr" Feb 13 19:54:10.626132 kubelet[2525]: I0213 19:54:10.625999 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5m2t\" (UniqueName: \"kubernetes.io/projected/eef88bc7-df5c-4812-b65a-e088e32440c4-kube-api-access-n5m2t\") pod \"coredns-668d6bf9bc-2m2jz\" (UID: \"eef88bc7-df5c-4812-b65a-e088e32440c4\") " pod="kube-system/coredns-668d6bf9bc-2m2jz" Feb 13 19:54:10.626132 kubelet[2525]: I0213 19:54:10.626019 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fmx\" (UniqueName: \"kubernetes.io/projected/ce6cf94a-45eb-47eb-acba-4bf09b224c4f-kube-api-access-99fmx\") pod \"coredns-668d6bf9bc-7bv58\" (UID: \"ce6cf94a-45eb-47eb-acba-4bf09b224c4f\") " pod="kube-system/coredns-668d6bf9bc-7bv58" Feb 13 19:54:10.626132 kubelet[2525]: I0213 19:54:10.626037 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef88bc7-df5c-4812-b65a-e088e32440c4-config-volume\") pod \"coredns-668d6bf9bc-2m2jz\" (UID: \"eef88bc7-df5c-4812-b65a-e088e32440c4\") " pod="kube-system/coredns-668d6bf9bc-2m2jz" Feb 13 19:54:10.626132 kubelet[2525]: I0213 19:54:10.626057 2525 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce6cf94a-45eb-47eb-acba-4bf09b224c4f-config-volume\") pod \"coredns-668d6bf9bc-7bv58\" (UID: \"ce6cf94a-45eb-47eb-acba-4bf09b224c4f\") " pod="kube-system/coredns-668d6bf9bc-7bv58" Feb 13 19:54:10.626132 kubelet[2525]: I0213 19:54:10.626111 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lnz\" (UniqueName: \"kubernetes.io/projected/26630c90-52b3-480e-9c9a-098510701036-kube-api-access-44lnz\") pod \"calico-apiserver-6ffbd469f7-5n427\" (UID: \"26630c90-52b3-480e-9c9a-098510701036\") " pod="calico-apiserver/calico-apiserver-6ffbd469f7-5n427" Feb 13 19:54:10.626268 kubelet[2525]: I0213 19:54:10.626140 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zrh9\" (UniqueName: \"kubernetes.io/projected/0df95c66-53bc-436d-8654-d036e666d8e1-kube-api-access-5zrh9\") pod \"calico-apiserver-6ffbd469f7-rkspr\" (UID: \"0df95c66-53bc-436d-8654-d036e666d8e1\") " pod="calico-apiserver/calico-apiserver-6ffbd469f7-rkspr" Feb 13 19:54:10.696722 kubelet[2525]: E0213 19:54:10.696691 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:10.727063 kubelet[2525]: I0213 19:54:10.727042 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28tn\" (UniqueName: \"kubernetes.io/projected/67b6a979-5b1c-436f-82c6-7e0dec8e8fa4-kube-api-access-q28tn\") pod \"calico-kube-controllers-764d679f55-4zqq8\" (UID: \"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4\") " pod="calico-system/calico-kube-controllers-764d679f55-4zqq8" Feb 13 19:54:10.727102 kubelet[2525]: I0213 19:54:10.727089 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67b6a979-5b1c-436f-82c6-7e0dec8e8fa4-tigera-ca-bundle\") pod \"calico-kube-controllers-764d679f55-4zqq8\" (UID: \"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4\") " pod="calico-system/calico-kube-controllers-764d679f55-4zqq8" Feb 13 19:54:10.870908 kubelet[2525]: E0213 19:54:10.870809 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:10.883228 kubelet[2525]: E0213 19:54:10.883190 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:11.079616 containerd[1472]: time="2025-02-13T19:54:11.079552964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-rkspr,Uid:0df95c66-53bc-436d-8654-d036e666d8e1,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:54:11.080046 containerd[1472]: time="2025-02-13T19:54:11.079652681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-5n427,Uid:26630c90-52b3-480e-9c9a-098510701036,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:54:11.080046 containerd[1472]: time="2025-02-13T19:54:11.079565978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2m2jz,Uid:eef88bc7-df5c-4812-b65a-e088e32440c4,Namespace:kube-system,Attempt:0,}" Feb 13 19:54:11.080046 
containerd[1472]: time="2025-02-13T19:54:11.079565928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7bv58,Uid:ce6cf94a-45eb-47eb-acba-4bf09b224c4f,Namespace:kube-system,Attempt:0,}" Feb 13 19:54:11.177925 containerd[1472]: time="2025-02-13T19:54:11.176371543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d679f55-4zqq8,Uid:67b6a979-5b1c-436f-82c6-7e0dec8e8fa4,Namespace:calico-system,Attempt:0,}" Feb 13 19:54:11.185446 containerd[1472]: time="2025-02-13T19:54:11.185396626Z" level=info msg="shim disconnected" id=0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0 namespace=k8s.io Feb 13 19:54:11.185446 containerd[1472]: time="2025-02-13T19:54:11.185442202Z" level=warning msg="cleaning up after shim disconnected" id=0ef93977058de47d9431a29367df0be19b1cc150666912a2fbeae41672239ef0 namespace=k8s.io Feb 13 19:54:11.185446 containerd[1472]: time="2025-02-13T19:54:11.185451289Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:54:11.362301 containerd[1472]: time="2025-02-13T19:54:11.362245675Z" level=error msg="Failed to destroy network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.362983 containerd[1472]: time="2025-02-13T19:54:11.362954290Z" level=error msg="encountered an error cleaning up failed sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.363149 containerd[1472]: time="2025-02-13T19:54:11.363092310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d679f55-4zqq8,Uid:67b6a979-5b1c-436f-82c6-7e0dec8e8fa4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.367667 containerd[1472]: time="2025-02-13T19:54:11.366376267Z" level=error msg="Failed to destroy network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.367667 containerd[1472]: time="2025-02-13T19:54:11.367017496Z" level=error msg="encountered an error cleaning up failed sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.367667 containerd[1472]: time="2025-02-13T19:54:11.367091955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-5n427,Uid:26630c90-52b3-480e-9c9a-098510701036,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.368467 kubelet[2525]: E0213 19:54:11.367222 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.368467 kubelet[2525]: E0213 19:54:11.367306 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764d679f55-4zqq8" Feb 13 19:54:11.368467 kubelet[2525]: E0213 19:54:11.367338 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764d679f55-4zqq8" Feb 13 19:54:11.368467 kubelet[2525]: E0213 19:54:11.367326 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.368663 kubelet[2525]: E0213 19:54:11.367398 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffbd469f7-5n427" Feb 13 19:54:11.368663 kubelet[2525]: E0213 19:54:11.367390 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-764d679f55-4zqq8_calico-system(67b6a979-5b1c-436f-82c6-7e0dec8e8fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-764d679f55-4zqq8_calico-system(67b6a979-5b1c-436f-82c6-7e0dec8e8fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764d679f55-4zqq8" podUID="67b6a979-5b1c-436f-82c6-7e0dec8e8fa4" Feb 13 19:54:11.368663 
kubelet[2525]: E0213 19:54:11.367421 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffbd469f7-5n427" Feb 13 19:54:11.368793 containerd[1472]: time="2025-02-13T19:54:11.368459322Z" level=error msg="Failed to destroy network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.368828 kubelet[2525]: E0213 19:54:11.367466 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ffbd469f7-5n427_calico-apiserver(26630c90-52b3-480e-9c9a-098510701036)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ffbd469f7-5n427_calico-apiserver(26630c90-52b3-480e-9c9a-098510701036)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffbd469f7-5n427" podUID="26630c90-52b3-480e-9c9a-098510701036" Feb 13 19:54:11.369278 containerd[1472]: time="2025-02-13T19:54:11.369246966Z" level=error msg="encountered an error cleaning up failed sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.369323 containerd[1472]: time="2025-02-13T19:54:11.369295366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2m2jz,Uid:eef88bc7-df5c-4812-b65a-e088e32440c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.369563 kubelet[2525]: E0213 19:54:11.369449 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.369563 kubelet[2525]: E0213 19:54:11.369489 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2m2jz" Feb 13 19:54:11.369563 kubelet[2525]: E0213 19:54:11.369504 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2m2jz" Feb 13 19:54:11.369667 kubelet[2525]: E0213 19:54:11.369537 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2m2jz_kube-system(eef88bc7-df5c-4812-b65a-e088e32440c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2m2jz_kube-system(eef88bc7-df5c-4812-b65a-e088e32440c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2m2jz" podUID="eef88bc7-df5c-4812-b65a-e088e32440c4" Feb 13 19:54:11.370295 containerd[1472]: time="2025-02-13T19:54:11.370249345Z" level=error msg="Failed to destroy network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.370626 containerd[1472]: time="2025-02-13T19:54:11.370573856Z" level=error msg="encountered an error cleaning up failed sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.370686 containerd[1472]: time="2025-02-13T19:54:11.370639189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-rkspr,Uid:0df95c66-53bc-436d-8654-d036e666d8e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.370847 kubelet[2525]: E0213 19:54:11.370815 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.370917 kubelet[2525]: E0213 19:54:11.370870 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffbd469f7-rkspr" Feb 13 19:54:11.370917 kubelet[2525]: E0213 19:54:11.370891 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffbd469f7-rkspr" Feb 13 19:54:11.371539 kubelet[2525]: E0213 19:54:11.370935 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ffbd469f7-rkspr_calico-apiserver(0df95c66-53bc-436d-8654-d036e666d8e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ffbd469f7-rkspr_calico-apiserver(0df95c66-53bc-436d-8654-d036e666d8e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffbd469f7-rkspr" podUID="0df95c66-53bc-436d-8654-d036e666d8e1" Feb 13 19:54:11.376391 containerd[1472]: time="2025-02-13T19:54:11.376338377Z" level=error msg="Failed to destroy network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.376750 containerd[1472]: time="2025-02-13T19:54:11.376709065Z" level=error msg="encountered an error cleaning up failed sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.376821 containerd[1472]: time="2025-02-13T19:54:11.376754810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7bv58,Uid:ce6cf94a-45eb-47eb-acba-4bf09b224c4f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.376989 kubelet[2525]: E0213 19:54:11.376956 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.377045 kubelet[2525]: E0213 19:54:11.377007 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7bv58" Feb 13 19:54:11.377045 kubelet[2525]: E0213 19:54:11.377029 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7bv58" Feb 13 19:54:11.377099 kubelet[2525]: E0213 19:54:11.377079 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7bv58_kube-system(ce6cf94a-45eb-47eb-acba-4bf09b224c4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7bv58_kube-system(ce6cf94a-45eb-47eb-acba-4bf09b224c4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7bv58" podUID="ce6cf94a-45eb-47eb-acba-4bf09b224c4f" Feb 13 19:54:11.641569 systemd[1]: Created slice kubepods-besteffort-poda35aff9a_f3a6_44d2_8ee2_7a8e5db0f8d6.slice - libcontainer container kubepods-besteffort-poda35aff9a_f3a6_44d2_8ee2_7a8e5db0f8d6.slice. 
Feb 13 19:54:11.643407 containerd[1472]: time="2025-02-13T19:54:11.643377947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rtnwd,Uid:a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6,Namespace:calico-system,Attempt:0,}" Feb 13 19:54:11.699295 kubelet[2525]: E0213 19:54:11.699219 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:11.700230 containerd[1472]: time="2025-02-13T19:54:11.700191576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:54:11.700429 kubelet[2525]: I0213 19:54:11.700406 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:11.701223 containerd[1472]: time="2025-02-13T19:54:11.701063379Z" level=info msg="StopPodSandbox for \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\"" Feb 13 19:54:11.701406 containerd[1472]: time="2025-02-13T19:54:11.701384012Z" level=info msg="Ensure that sandbox 534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768 in task-service has been cleanup successfully" Feb 13 19:54:11.701926 kubelet[2525]: I0213 19:54:11.701564 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:11.702188 containerd[1472]: time="2025-02-13T19:54:11.702077578Z" level=info msg="StopPodSandbox for \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\"" Feb 13 19:54:11.702404 containerd[1472]: time="2025-02-13T19:54:11.702363157Z" level=info msg="Ensure that sandbox 2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5 in task-service has been cleanup successfully" Feb 13 19:54:11.703553 kubelet[2525]: I0213 19:54:11.703249 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:11.703800 containerd[1472]: time="2025-02-13T19:54:11.703764497Z" level=info msg="StopPodSandbox for \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\"" Feb 13 19:54:11.705102 kubelet[2525]: I0213 19:54:11.705043 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:11.705826 containerd[1472]: time="2025-02-13T19:54:11.705272117Z" level=info msg="Ensure that sandbox c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3 in task-service has been cleanup successfully" Feb 13 19:54:11.706081 containerd[1472]: time="2025-02-13T19:54:11.706055473Z" level=info msg="StopPodSandbox for \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\"" Feb 13 19:54:11.706209 containerd[1472]: time="2025-02-13T19:54:11.706182813Z" level=info msg="Ensure that sandbox bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3 in task-service has been cleanup successfully" Feb 13 19:54:11.708439 kubelet[2525]: I0213 19:54:11.708409 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:11.709199 containerd[1472]: time="2025-02-13T19:54:11.709161415Z" level=info msg="StopPodSandbox for \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\"" Feb 13 19:54:11.710628 
containerd[1472]: time="2025-02-13T19:54:11.710589706Z" level=info msg="Ensure that sandbox d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac in task-service has been cleanup successfully" Feb 13 19:54:11.749179 containerd[1472]: time="2025-02-13T19:54:11.748995964Z" level=error msg="StopPodSandbox for \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\" failed" error="failed to destroy network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.749324 kubelet[2525]: E0213 19:54:11.749268 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:11.749384 kubelet[2525]: E0213 19:54:11.749324 2525 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3"} Feb 13 19:54:11.749415 kubelet[2525]: E0213 19:54:11.749380 2525 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eef88bc7-df5c-4812-b65a-e088e32440c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:54:11.749415 kubelet[2525]: E0213 19:54:11.749403 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eef88bc7-df5c-4812-b65a-e088e32440c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2m2jz" podUID="eef88bc7-df5c-4812-b65a-e088e32440c4" Feb 13 19:54:11.750417 containerd[1472]: time="2025-02-13T19:54:11.750394139Z" level=error msg="StopPodSandbox for \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\" failed" error="failed to destroy network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.750884 kubelet[2525]: E0213 19:54:11.750831 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:11.751076 kubelet[2525]: E0213 19:54:11.751045 2525 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768"} Feb 13 19:54:11.751076 kubelet[2525]: E0213 19:54:11.751071 2525 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce6cf94a-45eb-47eb-acba-4bf09b224c4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:54:11.751283 kubelet[2525]: E0213 19:54:11.751092 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce6cf94a-45eb-47eb-acba-4bf09b224c4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7bv58" podUID="ce6cf94a-45eb-47eb-acba-4bf09b224c4f" Feb 13 19:54:11.752812 containerd[1472]: time="2025-02-13T19:54:11.752760708Z" level=error msg="StopPodSandbox for \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\" failed" error="failed to destroy network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.753064 kubelet[2525]: E0213 19:54:11.753024 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:11.753173 kubelet[2525]: E0213 19:54:11.753080 2525 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5"} Feb 13 19:54:11.753173 kubelet[2525]: E0213 19:54:11.753118 2525 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26630c90-52b3-480e-9c9a-098510701036\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:54:11.753173 kubelet[2525]: E0213 19:54:11.753141 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26630c90-52b3-480e-9c9a-098510701036\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffbd469f7-5n427" podUID="26630c90-52b3-480e-9c9a-098510701036" Feb 13 19:54:11.755356 containerd[1472]: time="2025-02-13T19:54:11.755314188Z" level=error msg="StopPodSandbox for \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\" failed" error="failed to destroy network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.755547 kubelet[2525]: E0213 19:54:11.755522 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:11.755587 kubelet[2525]: E0213 19:54:11.755549 2525 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac"} Feb 13 19:54:11.755587 kubelet[2525]: E0213 19:54:11.755578 2525 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0df95c66-53bc-436d-8654-d036e666d8e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:54:11.755659 kubelet[2525]: E0213 19:54:11.755594 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0df95c66-53bc-436d-8654-d036e666d8e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffbd469f7-rkspr" podUID="0df95c66-53bc-436d-8654-d036e666d8e1" Feb 13 19:54:11.757983 containerd[1472]: time="2025-02-13T19:54:11.757943442Z" level=error msg="StopPodSandbox for \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\" failed" error="failed to destroy network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:11.758111 kubelet[2525]: E0213 19:54:11.758054 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:11.758111 kubelet[2525]: E0213 19:54:11.758089 2525 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3"} Feb 13 19:54:11.758175 kubelet[2525]: E0213 19:54:11.758110 2525 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:54:11.758175 kubelet[2525]: E0213 19:54:11.758131 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764d679f55-4zqq8" podUID="67b6a979-5b1c-436f-82c6-7e0dec8e8fa4" Feb 13 19:54:11.989022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5-shm.mount: Deactivated successfully. Feb 13 19:54:11.989126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac-shm.mount: Deactivated successfully. 
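The two mount units systemd just deactivated are the /dev/shm tmpfs mounts of the failed sandboxes. systemd escapes "/" as "-" (and a literal "-" as "\x2d") in unit names, so the sandbox paths can be read back out of them; a throwaway decoder assuming only that escaping rule:

    // unit_to_path.go: reverse systemd's mount-unit name escaping.
    package main

    import (
        "fmt"
        "strings"
    )

    func unitToPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        name = strings.ReplaceAll(name, "-", "/")    // "-" encodes "/"
        name = strings.ReplaceAll(name, `\x2d`, "-") // `\x2d` encodes a literal "-"
        return "/" + name
    }

    func main() {
        fmt.Println(unitToPath("run-containerd-io.containerd.grpc.v1.cri-sandboxes-2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5-shm.mount"))
        // /run/containerd/io.containerd.grpc.v1.cri/sandboxes/2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5/shm
    }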
Feb 13 19:54:12.250870 containerd[1472]: time="2025-02-13T19:54:12.250726427Z" level=error msg="Failed to destroy network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:12.251213 containerd[1472]: time="2025-02-13T19:54:12.251094691Z" level=error msg="encountered an error cleaning up failed sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:12.251213 containerd[1472]: time="2025-02-13T19:54:12.251141048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rtnwd,Uid:a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:12.251389 kubelet[2525]: E0213 19:54:12.251318 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:12.251389 kubelet[2525]: E0213 19:54:12.251372 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rtnwd" Feb 13 19:54:12.251449 kubelet[2525]: E0213 19:54:12.251394 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rtnwd" Feb 13 19:54:12.251449 kubelet[2525]: E0213 19:54:12.251427 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rtnwd_calico-system(a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rtnwd_calico-system(a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rtnwd" 
podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:12.253072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95-shm.mount: Deactivated successfully. Feb 13 19:54:12.711022 kubelet[2525]: I0213 19:54:12.710987 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:12.711572 containerd[1472]: time="2025-02-13T19:54:12.711530266Z" level=info msg="StopPodSandbox for \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\"" Feb 13 19:54:12.711731 containerd[1472]: time="2025-02-13T19:54:12.711711408Z" level=info msg="Ensure that sandbox 9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95 in task-service has been cleanup successfully" Feb 13 19:54:12.736238 containerd[1472]: time="2025-02-13T19:54:12.736185243Z" level=error msg="StopPodSandbox for \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\" failed" error="failed to destroy network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:12.736472 kubelet[2525]: E0213 19:54:12.736423 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:12.736561 kubelet[2525]: E0213 19:54:12.736483 2525 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95"} Feb 13 19:54:12.736561 kubelet[2525]: E0213 19:54:12.736525 2525 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:54:12.736664 kubelet[2525]: E0213 19:54:12.736563 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rtnwd" podUID="a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6" Feb 13 19:54:15.831042 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:59818.service - OpenSSH per-connection server daemon (10.0.0.1:59818). 
Feb 13 19:54:15.887089 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 59818 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:15.889130 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:15.895021 systemd-logind[1456]: New session 10 of user core. Feb 13 19:54:15.900352 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:54:16.055516 sshd[3661]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:16.060519 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:54:16.060849 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:59818.service: Deactivated successfully. Feb 13 19:54:16.063478 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:54:16.064327 systemd-logind[1456]: Removed session 10. Feb 13 19:54:17.700890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802754565.mount: Deactivated successfully. Feb 13 19:54:18.982711 containerd[1472]: time="2025-02-13T19:54:18.982654189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:18.983538 containerd[1472]: time="2025-02-13T19:54:18.983501812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:54:18.984818 containerd[1472]: time="2025-02-13T19:54:18.984758356Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:18.987214 containerd[1472]: time="2025-02-13T19:54:18.987180923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:18.987763 containerd[1472]: time="2025-02-13T19:54:18.987718494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.287488837s" Feb 13 19:54:18.987763 containerd[1472]: time="2025-02-13T19:54:18.987756445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:54:18.995816 containerd[1472]: time="2025-02-13T19:54:18.995759939Z" level=info msg="CreateContainer within sandbox \"48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:54:19.055661 containerd[1472]: time="2025-02-13T19:54:19.055618566Z" level=info msg="CreateContainer within sandbox \"48262d3f23a53dc91206cda4c4c0082d8edc506992863cdd7c0925fa050e25e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"178c3ce04dbe109e03e201ad335e5b4cab17dd673a8470b9f5267fa3b6bf402e\"" Feb 13 19:54:19.056246 containerd[1472]: time="2025-02-13T19:54:19.056208075Z" level=info msg="StartContainer for \"178c3ce04dbe109e03e201ad335e5b4cab17dd673a8470b9f5267fa3b6bf402e\"" Feb 13 19:54:19.119908 systemd[1]: Started cri-containerd-178c3ce04dbe109e03e201ad335e5b4cab17dd673a8470b9f5267fa3b6bf402e.scope - libcontainer container 
178c3ce04dbe109e03e201ad335e5b4cab17dd673a8470b9f5267fa3b6bf402e. Feb 13 19:54:19.373832 containerd[1472]: time="2025-02-13T19:54:19.373742990Z" level=info msg="StartContainer for \"178c3ce04dbe109e03e201ad335e5b4cab17dd673a8470b9f5267fa3b6bf402e\" returns successfully" Feb 13 19:54:19.400208 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:54:19.400329 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 19:54:19.736517 kubelet[2525]: E0213 19:54:19.736394 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:19.756536 kubelet[2525]: I0213 19:54:19.756467 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cqhnq" podStartSLOduration=2.220638029 podStartE2EDuration="22.756424425s" podCreationTimestamp="2025-02-13 19:53:57 +0000 UTC" firstStartedPulling="2025-02-13 19:53:58.452766397 +0000 UTC m=+14.900431100" lastFinishedPulling="2025-02-13 19:54:18.988552793 +0000 UTC m=+35.436217496" observedRunningTime="2025-02-13 19:54:19.75612897 +0000 UTC m=+36.203793673" watchObservedRunningTime="2025-02-13 19:54:19.756424425 +0000 UTC m=+36.204089138" Feb 13 19:54:20.738412 kubelet[2525]: E0213 19:54:20.738128 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:20.758870 kernel: bpftool[3907]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:54:20.994737 systemd-networkd[1403]: vxlan.calico: Link UP Feb 13 19:54:20.994748 systemd-networkd[1403]: vxlan.calico: Gained carrier Feb 13 19:54:21.068712 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:46020.service - OpenSSH per-connection server daemon (10.0.0.1:46020). Feb 13 19:54:21.111298 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 46020 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:21.113710 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:21.117908 systemd-logind[1456]: New session 11 of user core. Feb 13 19:54:21.125916 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:54:21.256189 sshd[3965]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:21.260487 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:46020.service: Deactivated successfully. Feb 13 19:54:21.262841 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:54:21.263698 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:54:21.264728 systemd-logind[1456]: Removed session 11. Feb 13 19:54:22.635808 containerd[1472]: time="2025-02-13T19:54:22.635710178Z" level=info msg="StopPodSandbox for \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\"" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.677 [INFO][4028] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.677 [INFO][4028] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" iface="eth0" netns="/var/run/netns/cni-ce0951c5-1dc6-54c8-7eec-a7bd1bd986e1" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.677 [INFO][4028] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" iface="eth0" netns="/var/run/netns/cni-ce0951c5-1dc6-54c8-7eec-a7bd1bd986e1" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.678 [INFO][4028] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" iface="eth0" netns="/var/run/netns/cni-ce0951c5-1dc6-54c8-7eec-a7bd1bd986e1" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.678 [INFO][4028] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.678 [INFO][4028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.727 [INFO][4035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.728 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.728 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.734 [WARNING][4035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.734 [INFO][4035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.736 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:22.741185 containerd[1472]: 2025-02-13 19:54:22.738 [INFO][4028] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:22.741827 containerd[1472]: time="2025-02-13T19:54:22.741373334Z" level=info msg="TearDown network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\" successfully" Feb 13 19:54:22.741827 containerd[1472]: time="2025-02-13T19:54:22.741400365Z" level=info msg="StopPodSandbox for \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\" returns successfully" Feb 13 19:54:22.742419 containerd[1472]: time="2025-02-13T19:54:22.742379035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-5n427,Uid:26630c90-52b3-480e-9c9a-098510701036,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:54:22.743910 systemd[1]: run-netns-cni\x2dce0951c5\x2d1dc6\x2d54c8\x2d7eec\x2da7bd1bd986e1.mount: Deactivated successfully. Feb 13 19:54:22.792940 systemd-networkd[1403]: vxlan.calico: Gained IPv6LL Feb 13 19:54:22.974095 systemd-networkd[1403]: calic6a3a7c32a2: Link UP Feb 13 19:54:22.974430 systemd-networkd[1403]: calic6a3a7c32a2: Gained carrier Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.912 [INFO][4044] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0 calico-apiserver-6ffbd469f7- calico-apiserver 26630c90-52b3-480e-9c9a-098510701036 843 0 2025-02-13 19:53:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ffbd469f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6ffbd469f7-5n427 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic6a3a7c32a2 [] []}} ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.912 [INFO][4044] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.940 [INFO][4056] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" HandleID="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.947 [INFO][4056] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" HandleID="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f47b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6ffbd469f7-5n427", "timestamp":"2025-02-13 19:54:22.940145431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.947 [INFO][4056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.947 [INFO][4056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.947 [INFO][4056] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.949 [INFO][4056] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.953 [INFO][4056] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.956 [INFO][4056] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.957 [INFO][4056] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.959 [INFO][4056] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.959 [INFO][4056] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.960 [INFO][4056] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4 Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.964 [INFO][4056] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.968 [INFO][4056] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.968 [INFO][4056] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" host="localhost" Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.968 [INFO][4056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
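The IPAM exchange above is the first successful network setup in the log: the host "localhost" holds the affine block 192.168.88.128/26 and claims 192.168.88.129, the first address out of it, for the new endpoint. A quick sanity check of that containment with nothing but the standard library:

    // ipam_check.go: confirm the claimed address sits in the affine block.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // host-affine block from the IPAM log
        pod := netip.MustParseAddr("192.168.88.129")        // address claimed for the endpoint
        fmt.Println(block.Contains(pod))                    // true: the /26 spans .128 through .191
    }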
Feb 13 19:54:22.986073 containerd[1472]: 2025-02-13 19:54:22.968 [INFO][4056] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" HandleID="k8s-pod-network.a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.987138 containerd[1472]: 2025-02-13 19:54:22.971 [INFO][4044] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"26630c90-52b3-480e-9c9a-098510701036", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6ffbd469f7-5n427", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6a3a7c32a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:22.987138 containerd[1472]: 2025-02-13 19:54:22.972 [INFO][4044] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.987138 containerd[1472]: 2025-02-13 19:54:22.972 [INFO][4044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6a3a7c32a2 ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.987138 containerd[1472]: 2025-02-13 19:54:22.974 [INFO][4044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:22.987138 containerd[1472]: 2025-02-13 19:54:22.975 [INFO][4044] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"26630c90-52b3-480e-9c9a-098510701036", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4", Pod:"calico-apiserver-6ffbd469f7-5n427", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6a3a7c32a2", MAC:"4e:b7:15:32:67:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:22.987138 containerd[1472]: 2025-02-13 19:54:22.982 [INFO][4044] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-5n427" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:23.012796 containerd[1472]: time="2025-02-13T19:54:23.012664546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:23.012796 containerd[1472]: time="2025-02-13T19:54:23.012757110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:23.012796 containerd[1472]: time="2025-02-13T19:54:23.012785794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:23.012975 containerd[1472]: time="2025-02-13T19:54:23.012860675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:23.035907 systemd[1]: Started cri-containerd-a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4.scope - libcontainer container a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4. 
Feb 13 19:54:23.046417 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:54:23.069552 containerd[1472]: time="2025-02-13T19:54:23.069513606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-5n427,Uid:26630c90-52b3-480e-9c9a-098510701036,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4\"" Feb 13 19:54:23.075410 containerd[1472]: time="2025-02-13T19:54:23.074569748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:54:24.392959 systemd-networkd[1403]: calic6a3a7c32a2: Gained IPv6LL Feb 13 19:54:25.563535 containerd[1472]: time="2025-02-13T19:54:25.563488087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:25.564826 containerd[1472]: time="2025-02-13T19:54:25.564793620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:54:25.566148 containerd[1472]: time="2025-02-13T19:54:25.566127646Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:25.568242 containerd[1472]: time="2025-02-13T19:54:25.568201543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:25.568837 containerd[1472]: time="2025-02-13T19:54:25.568810016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.494200544s" Feb 13 19:54:25.568960 containerd[1472]: time="2025-02-13T19:54:25.568838329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:54:25.570653 containerd[1472]: time="2025-02-13T19:54:25.570617482Z" level=info msg="CreateContainer within sandbox \"a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:54:25.584352 containerd[1472]: time="2025-02-13T19:54:25.584307994Z" level=info msg="CreateContainer within sandbox \"a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1c59dd8218d914973bff689cd43c5dc35014624a26272e6f4a5a514809fe8bf\"" Feb 13 19:54:25.584845 containerd[1472]: time="2025-02-13T19:54:25.584807142Z" level=info msg="StartContainer for \"d1c59dd8218d914973bff689cd43c5dc35014624a26272e6f4a5a514809fe8bf\"" Feb 13 19:54:25.615917 systemd[1]: Started cri-containerd-d1c59dd8218d914973bff689cd43c5dc35014624a26272e6f4a5a514809fe8bf.scope - libcontainer container d1c59dd8218d914973bff689cd43c5dc35014624a26272e6f4a5a514809fe8bf. 
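Two image pulls are timed in this stretch, and the figures are self-consistent: calico/node read 142742010 bytes in 7.287488837s (about 19.6 MB/s) and calico/apiserver read 42001404 bytes in 2.494200544s (about 16.8 MB/s). A one-off check of that arithmetic, using only numbers quoted in the log:

    // pull_throughput.go: bytes read divided by pull duration for each image.
    package main

    import "fmt"

    func main() {
        fmt.Printf("calico/node:      %.1f MB/s\n", 142742010/7.287488837/1e6)
        fmt.Printf("calico/apiserver: %.1f MB/s\n", 42001404/2.494200544/1e6)
    }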
Feb 13 19:54:25.636541 containerd[1472]: time="2025-02-13T19:54:25.636507741Z" level=info msg="StopPodSandbox for \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\"" Feb 13 19:54:25.637874 containerd[1472]: time="2025-02-13T19:54:25.637132695Z" level=info msg="StopPodSandbox for \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\"" Feb 13 19:54:25.637874 containerd[1472]: time="2025-02-13T19:54:25.637579525Z" level=info msg="StopPodSandbox for \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\"" Feb 13 19:54:25.677704 containerd[1472]: time="2025-02-13T19:54:25.677637338Z" level=info msg="StartContainer for \"d1c59dd8218d914973bff689cd43c5dc35014624a26272e6f4a5a514809fe8bf\" returns successfully" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.699 [INFO][4188] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.699 [INFO][4188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" iface="eth0" netns="/var/run/netns/cni-3356bc82-65e8-e7aa-aea0-61af0c377331" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.700 [INFO][4188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" iface="eth0" netns="/var/run/netns/cni-3356bc82-65e8-e7aa-aea0-61af0c377331" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.701 [INFO][4188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" iface="eth0" netns="/var/run/netns/cni-3356bc82-65e8-e7aa-aea0-61af0c377331" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.701 [INFO][4188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.701 [INFO][4188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.729 [INFO][4229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.729 [INFO][4229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.729 [INFO][4229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.740 [WARNING][4229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.740 [INFO][4229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.742 [INFO][4229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:25.747296 containerd[1472]: 2025-02-13 19:54:25.744 [INFO][4188] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:25.749409 containerd[1472]: time="2025-02-13T19:54:25.747765791Z" level=info msg="TearDown network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\" successfully" Feb 13 19:54:25.749409 containerd[1472]: time="2025-02-13T19:54:25.747807900Z" level=info msg="StopPodSandbox for \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\" returns successfully" Feb 13 19:54:25.749409 containerd[1472]: time="2025-02-13T19:54:25.748662756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d679f55-4zqq8,Uid:67b6a979-5b1c-436f-82c6-7e0dec8e8fa4,Namespace:calico-system,Attempt:1,}" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.714 [INFO][4205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.714 [INFO][4205] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" iface="eth0" netns="/var/run/netns/cni-8cb48a3f-5174-f267-a60a-67963ff6fc60" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.714 [INFO][4205] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" iface="eth0" netns="/var/run/netns/cni-8cb48a3f-5174-f267-a60a-67963ff6fc60" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.714 [INFO][4205] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" iface="eth0" netns="/var/run/netns/cni-8cb48a3f-5174-f267-a60a-67963ff6fc60" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.714 [INFO][4205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.714 [INFO][4205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.738 [INFO][4240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.738 [INFO][4240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.742 [INFO][4240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.749 [WARNING][4240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.749 [INFO][4240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.751 [INFO][4240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:25.760327 containerd[1472]: 2025-02-13 19:54:25.758 [INFO][4205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:25.760687 containerd[1472]: time="2025-02-13T19:54:25.760530134Z" level=info msg="TearDown network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\" successfully" Feb 13 19:54:25.760687 containerd[1472]: time="2025-02-13T19:54:25.760556964Z" level=info msg="StopPodSandbox for \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\" returns successfully" Feb 13 19:54:25.761766 containerd[1472]: time="2025-02-13T19:54:25.761623728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-rkspr,Uid:0df95c66-53bc-436d-8654-d036e666d8e1,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.707 [INFO][4191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.710 [INFO][4191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" iface="eth0" netns="/var/run/netns/cni-52e5a3af-7243-da1f-1881-511351deb46b" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.710 [INFO][4191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" iface="eth0" netns="/var/run/netns/cni-52e5a3af-7243-da1f-1881-511351deb46b" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.710 [INFO][4191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" iface="eth0" netns="/var/run/netns/cni-52e5a3af-7243-da1f-1881-511351deb46b" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.710 [INFO][4191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.710 [INFO][4191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.742 [INFO][4234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.742 [INFO][4234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.751 [INFO][4234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.756 [WARNING][4234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.756 [INFO][4234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.758 [INFO][4234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:25.765282 containerd[1472]: 2025-02-13 19:54:25.762 [INFO][4191] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:25.765661 containerd[1472]: time="2025-02-13T19:54:25.765621329Z" level=info msg="TearDown network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\" successfully" Feb 13 19:54:25.765661 containerd[1472]: time="2025-02-13T19:54:25.765642619Z" level=info msg="StopPodSandbox for \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\" returns successfully" Feb 13 19:54:25.766140 kubelet[2525]: E0213 19:54:25.766000 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:25.766564 containerd[1472]: time="2025-02-13T19:54:25.766266271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7bv58,Uid:ce6cf94a-45eb-47eb-acba-4bf09b224c4f,Namespace:kube-system,Attempt:1,}" Feb 13 19:54:25.900284 kubelet[2525]: I0213 19:54:25.899803 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ffbd469f7-5n427" podStartSLOduration=25.40099583 podStartE2EDuration="27.899758015s" podCreationTimestamp="2025-02-13 19:53:58 +0000 UTC" firstStartedPulling="2025-02-13 19:54:23.070704605 +0000 UTC m=+39.518369308" lastFinishedPulling="2025-02-13 19:54:25.56946679 +0000 UTC m=+42.017131493" observedRunningTime="2025-02-13 19:54:25.899560523 +0000 UTC m=+42.347225236" watchObservedRunningTime="2025-02-13 19:54:25.899758015 +0000 UTC m=+42.347422718" Feb 13 19:54:26.275011 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:46036.service - OpenSSH per-connection server daemon (10.0.0.1:46036). Feb 13 19:54:26.312874 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 46036 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:26.314475 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:26.318546 systemd-logind[1456]: New session 12 of user core. Feb 13 19:54:26.327916 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:54:26.479665 sshd[4258]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:26.487912 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:46036.service: Deactivated successfully. Feb 13 19:54:26.489949 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:54:26.490901 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:54:26.492285 systemd-logind[1456]: Removed session 12. Feb 13 19:54:26.502200 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:53982.service - OpenSSH per-connection server daemon (10.0.0.1:53982). Feb 13 19:54:26.534754 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 53982 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:26.536228 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:26.540153 systemd-logind[1456]: New session 13 of user core. Feb 13 19:54:26.550908 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:54:26.582240 systemd[1]: run-netns-cni\x2d3356bc82\x2d65e8\x2de7aa\x2daea0\x2d61af0c377331.mount: Deactivated successfully. Feb 13 19:54:26.582378 systemd[1]: run-netns-cni\x2d52e5a3af\x2d7243\x2dda1f\x2d1881\x2d511351deb46b.mount: Deactivated successfully. 
Feb 13 19:54:26.582465 systemd[1]: run-netns-cni\x2d8cb48a3f\x2d5174\x2df267\x2da60a\x2d67963ff6fc60.mount: Deactivated successfully. Feb 13 19:54:26.636381 containerd[1472]: time="2025-02-13T19:54:26.636343333Z" level=info msg="StopPodSandbox for \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\"" Feb 13 19:54:26.756869 kubelet[2525]: I0213 19:54:26.756837 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:54:26.892017 sshd[4281]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:26.912796 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:53982.service: Deactivated successfully. Feb 13 19:54:26.916655 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:54:26.922054 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:54:26.933543 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996). Feb 13 19:54:26.935231 systemd-logind[1456]: Removed session 13. Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.883 [INFO][4305] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.884 [INFO][4305] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" iface="eth0" netns="/var/run/netns/cni-3f42dd31-0aa0-48b9-29c6-58daab8445f9" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.884 [INFO][4305] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" iface="eth0" netns="/var/run/netns/cni-3f42dd31-0aa0-48b9-29c6-58daab8445f9" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.884 [INFO][4305] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" iface="eth0" netns="/var/run/netns/cni-3f42dd31-0aa0-48b9-29c6-58daab8445f9" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.885 [INFO][4305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.885 [INFO][4305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.944 [INFO][4313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.944 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.944 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.954 [WARNING][4313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.954 [INFO][4313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.958 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:26.963906 containerd[1472]: 2025-02-13 19:54:26.961 [INFO][4305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:26.964416 containerd[1472]: time="2025-02-13T19:54:26.964374720Z" level=info msg="TearDown network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\" successfully" Feb 13 19:54:26.964475 containerd[1472]: time="2025-02-13T19:54:26.964454921Z" level=info msg="StopPodSandbox for \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\" returns successfully" Feb 13 19:54:26.964990 kubelet[2525]: E0213 19:54:26.964971 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:26.966953 containerd[1472]: time="2025-02-13T19:54:26.966798182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2m2jz,Uid:eef88bc7-df5c-4812-b65a-e088e32440c4,Namespace:kube-system,Attempt:1,}" Feb 13 19:54:26.968140 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:26.969755 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:26.975852 systemd-logind[1456]: New session 14 of user core. Feb 13 19:54:26.980426 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 19:54:27.088592 systemd-networkd[1403]: cali28c91d8afc4: Link UP Feb 13 19:54:27.088808 systemd-networkd[1403]: cali28c91d8afc4: Gained carrier Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:26.986 [INFO][4337] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7bv58-eth0 coredns-668d6bf9bc- kube-system ce6cf94a-45eb-47eb-acba-4bf09b224c4f 871 0 2025-02-13 19:53:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7bv58 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28c91d8afc4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:26.986 [INFO][4337] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.027 [INFO][4374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" HandleID="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.040 [INFO][4374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" HandleID="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333030), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7bv58", "timestamp":"2025-02-13 19:54:27.027872153 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.041 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.041 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
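The first two entries in this burst, "cali28c91d8afc4: Link UP" and "Gained carrier", are systemd-networkd noticing the host side of the veth pair Calico just created for coredns-668d6bf9bc-7bv58; the IPAM walk that follows continues below. The same link state can be read programmatically; a small sketch with vishvananda/netlink, using the interface name from the log (illustrative only):

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side veth created by Calico for the coredns pod (name from the log).
	link, err := netlink.LinkByName("cali28c91d8afc4")
	if err != nil {
		log.Fatal(err)
	}
	attrs := link.Attrs()
	// OperState mirrors what systemd-networkd logs as "Link UP" / "Gained carrier".
	fmt.Printf("%s: index=%d mtu=%d state=%s\n",
		attrs.Name, attrs.Index, attrs.MTU, attrs.OperState)
}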
Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.041 [INFO][4374] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.048 [INFO][4374] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.055 [INFO][4374] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.062 [INFO][4374] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.065 [INFO][4374] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.068 [INFO][4374] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.068 [INFO][4374] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.070 [INFO][4374] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.074 [INFO][4374] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.080 [INFO][4374] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.080 [INFO][4374] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" host="localhost" Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.081 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
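The assignment walk just above is Calico IPAM end to end: under the host-wide lock it confirms this node's affinity for the block 192.168.88.128/26, loads the block, and claims the next free slot, 192.168.88.130, for coredns-668d6bf9bc-7bv58. A /26 leaves six host bits, so the block spans 64 addresses, 192.168.88.128 through 192.168.88.191, and one affine block comfortably covers every pod scheduled on this node. A stdlib-only sketch of that block arithmetic (illustrative, not Calico's allocator):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this host holds an affinity for, per the log above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 leaves 32-26 = 6 host bits: 2^6 = 64 addresses.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// Walk the block the way a naive allocator would scan for a free slot;
	// the first few slots are the ones handed out in this log (.130-.133).
	addr := block.Addr()
	for i := 0; i < size; i++ {
		if i < 6 {
			fmt.Println(addr)
		}
		addr = addr.Next()
	}
}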
Feb 13 19:54:27.110227 containerd[1472]: 2025-02-13 19:54:27.081 [INFO][4374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" HandleID="k8s-pod-network.d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:27.110845 containerd[1472]: 2025-02-13 19:54:27.085 [INFO][4337] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7bv58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ce6cf94a-45eb-47eb-acba-4bf09b224c4f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7bv58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28c91d8afc4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.110845 containerd[1472]: 2025-02-13 19:54:27.085 [INFO][4337] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:27.110845 containerd[1472]: 2025-02-13 19:54:27.085 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28c91d8afc4 ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:27.110845 containerd[1472]: 2025-02-13 19:54:27.088 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:27.110845 containerd[1472]: 2025-02-13 19:54:27.089 
[INFO][4337] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7bv58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ce6cf94a-45eb-47eb-acba-4bf09b224c4f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd", Pod:"coredns-668d6bf9bc-7bv58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28c91d8afc4", MAC:"46:b5:48:85:9f:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.110845 containerd[1472]: 2025-02-13 19:54:27.103 [INFO][4337] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7bv58" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:27.139111 sshd[4360]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:27.143473 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:54:27.143877 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:53996.service: Deactivated successfully. Feb 13 19:54:27.146718 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:54:27.148706 containerd[1472]: time="2025-02-13T19:54:27.148553227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:27.149057 containerd[1472]: time="2025-02-13T19:54:27.148985799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:27.149057 containerd[1472]: time="2025-02-13T19:54:27.149006758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.149184 systemd-logind[1456]: Removed session 14. 
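With the endpoint written to the datastore, the log settles into a repeating pattern: containerd loads the runc shim's ttrpc plugins (the "loading plugin io.containerd.ttrpc.v1..." lines), systemd places the container in a cri-containerd-<id>.scope, and the kubelet's CreateContainer/StartContainer calls surface as the "StartContainer ... returns successfully" entries. The same create-and-start sequence can be driven directly against containerd's Go client; a minimal sketch, assuming a reachable containerd socket and substituting a busybox image for the pod images above:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin works in the "k8s.io" namespace, as the sandboxes above do.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: record metadata, a snapshot, and a runtime spec.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: NewTask is what spawns the shim (and its
	// "loading plugin io.containerd.ttrpc.v1.task" lines); Start runs it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("StartContainer returned successfully")
}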
Feb 13 19:54:27.150374 containerd[1472]: time="2025-02-13T19:54:27.150282224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.169029 systemd[1]: Started cri-containerd-d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd.scope - libcontainer container d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd. Feb 13 19:54:27.179050 systemd-networkd[1403]: cali1910a470cb7: Link UP Feb 13 19:54:27.180021 systemd-networkd[1403]: cali1910a470cb7: Gained carrier Feb 13 19:54:27.182086 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:26.988 [INFO][4333] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0 calico-apiserver-6ffbd469f7- calico-apiserver 0df95c66-53bc-436d-8654-d036e666d8e1 872 0 2025-02-13 19:53:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ffbd469f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6ffbd469f7-rkspr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1910a470cb7 [] []}} ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:26.988 [INFO][4333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.033 [INFO][4389] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" HandleID="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.052 [INFO][4389] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" HandleID="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2be0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6ffbd469f7-rkspr", "timestamp":"2025-02-13 19:54:27.033628797 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.053 [INFO][4389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.081 [INFO][4389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.081 [INFO][4389] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.146 [INFO][4389] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.150 [INFO][4389] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.160 [INFO][4389] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.161 [INFO][4389] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.164 [INFO][4389] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.164 [INFO][4389] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.165 [INFO][4389] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070 Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.168 [INFO][4389] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.173 [INFO][4389] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.173 [INFO][4389] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" host="localhost" Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.173 [INFO][4389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
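What "Populated endpoint" and "Calico CNI using IPs: [192.168.88.131/32]" boil down to is the CNI ADD result the plugin hands back to containerd: the interfaces it created and the per-endpoint /32 it assigned. A sketch of that result shape using simplified local structs (the real types live in the containernetworking/cni module; the version string and elided sandbox path are placeholders):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Simplified stand-ins for the CNI ADD result JSON.
type cniInterface struct {
	Name    string `json:"name"`
	Mac     string `json:"mac,omitempty"`
	Sandbox string `json:"sandbox,omitempty"`
}

type cniIP struct {
	Address   string `json:"address"`
	Interface int    `json:"interface"` // index into the interfaces array
}

type cniResult struct {
	CNIVersion string         `json:"cniVersion"`
	Interfaces []cniInterface `json:"interfaces"`
	IPs        []cniIP        `json:"ips"`
}

func main() {
	// Values taken from the calico-apiserver-6ffbd469f7-rkspr endpoint above.
	res := cniResult{
		CNIVersion: "1.0.0",
		Interfaces: []cniInterface{
			{Name: "cali1910a470cb7"},                     // host side of the veth pair
			{Name: "eth0", Sandbox: "/var/run/netns/..."}, // pod side (path elided)
		},
		IPs: []cniIP{
			{Address: "192.168.88.131/32", Interface: 1}, // one /32 per endpoint, as logged
		},
	}
	out, err := json.MarshalIndent(res, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}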
Feb 13 19:54:27.195812 containerd[1472]: 2025-02-13 19:54:27.173 [INFO][4389] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" HandleID="k8s-pod-network.16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:27.196346 containerd[1472]: 2025-02-13 19:54:27.176 [INFO][4333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df95c66-53bc-436d-8654-d036e666d8e1", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6ffbd469f7-rkspr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1910a470cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.196346 containerd[1472]: 2025-02-13 19:54:27.176 [INFO][4333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:27.196346 containerd[1472]: 2025-02-13 19:54:27.176 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1910a470cb7 ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:27.196346 containerd[1472]: 2025-02-13 19:54:27.179 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:27.196346 containerd[1472]: 2025-02-13 19:54:27.179 [INFO][4333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df95c66-53bc-436d-8654-d036e666d8e1", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070", Pod:"calico-apiserver-6ffbd469f7-rkspr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1910a470cb7", MAC:"8a:5c:7c:8a:af:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.196346 containerd[1472]: 2025-02-13 19:54:27.192 [INFO][4333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070" Namespace="calico-apiserver" Pod="calico-apiserver-6ffbd469f7-rkspr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:27.209561 containerd[1472]: time="2025-02-13T19:54:27.209511233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7bv58,Uid:ce6cf94a-45eb-47eb-acba-4bf09b224c4f,Namespace:kube-system,Attempt:1,} returns sandbox id \"d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd\"" Feb 13 19:54:27.210310 kubelet[2525]: E0213 19:54:27.210269 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:27.212200 containerd[1472]: time="2025-02-13T19:54:27.212172682Z" level=info msg="CreateContainer within sandbox \"d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:54:27.223463 containerd[1472]: time="2025-02-13T19:54:27.223151155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:27.223463 containerd[1472]: time="2025-02-13T19:54:27.223221627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:27.223463 containerd[1472]: time="2025-02-13T19:54:27.223236054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.223463 containerd[1472]: time="2025-02-13T19:54:27.223337625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.243926 systemd[1]: Started cri-containerd-16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070.scope - libcontainer container 16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070. Feb 13 19:54:27.263929 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:54:27.265812 containerd[1472]: time="2025-02-13T19:54:27.265755144Z" level=info msg="CreateContainer within sandbox \"d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97a69b467eb042f088e92aa9a668bd2bbc3518b3e4f9ddbb28f09ae685ac8224\"" Feb 13 19:54:27.266591 containerd[1472]: time="2025-02-13T19:54:27.266557441Z" level=info msg="StartContainer for \"97a69b467eb042f088e92aa9a668bd2bbc3518b3e4f9ddbb28f09ae685ac8224\"" Feb 13 19:54:27.302183 systemd-networkd[1403]: cali753154bc71f: Link UP Feb 13 19:54:27.302388 systemd-networkd[1403]: cali753154bc71f: Gained carrier Feb 13 19:54:27.302934 systemd[1]: Started cri-containerd-97a69b467eb042f088e92aa9a668bd2bbc3518b3e4f9ddbb28f09ae685ac8224.scope - libcontainer container 97a69b467eb042f088e92aa9a668bd2bbc3518b3e4f9ddbb28f09ae685ac8224. Feb 13 19:54:27.305677 containerd[1472]: time="2025-02-13T19:54:27.305586535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffbd469f7-rkspr,Uid:0df95c66-53bc-436d-8654-d036e666d8e1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070\"" Feb 13 19:54:27.312084 containerd[1472]: time="2025-02-13T19:54:27.312016173Z" level=info msg="CreateContainer within sandbox \"16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:26.986 [INFO][4318] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0 calico-kube-controllers-764d679f55- calico-system 67b6a979-5b1c-436f-82c6-7e0dec8e8fa4 870 0 2025-02-13 19:53:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:764d679f55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-764d679f55-4zqq8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali753154bc71f [] []}} ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:26.987 [INFO][4318] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:27.317620 
containerd[1472]: 2025-02-13 19:54:27.041 [INFO][4388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" HandleID="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.054 [INFO][4388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" HandleID="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004427a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-764d679f55-4zqq8", "timestamp":"2025-02-13 19:54:27.041004853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.054 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.174 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.174 [INFO][4388] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.246 [INFO][4388] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.252 [INFO][4388] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.264 [INFO][4388] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.267 [INFO][4388] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.278 [INFO][4388] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.278 [INFO][4388] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.282 [INFO][4388] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1 Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.285 [INFO][4388] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.294 [INFO][4388] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" host="localhost" Feb 13 
19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.295 [INFO][4388] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" host="localhost" Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.295 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:27.317620 containerd[1472]: 2025-02-13 19:54:27.295 [INFO][4388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" HandleID="k8s-pod-network.2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:27.318286 containerd[1472]: 2025-02-13 19:54:27.300 [INFO][4318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0", GenerateName:"calico-kube-controllers-764d679f55-", Namespace:"calico-system", SelfLink:"", UID:"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764d679f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-764d679f55-4zqq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali753154bc71f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.318286 containerd[1472]: 2025-02-13 19:54:27.300 [INFO][4318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:27.318286 containerd[1472]: 2025-02-13 19:54:27.300 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali753154bc71f ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:27.318286 containerd[1472]: 2025-02-13 19:54:27.302 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:27.318286 containerd[1472]: 2025-02-13 19:54:27.303 [INFO][4318] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0", GenerateName:"calico-kube-controllers-764d679f55-", Namespace:"calico-system", SelfLink:"", UID:"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764d679f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1", Pod:"calico-kube-controllers-764d679f55-4zqq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali753154bc71f", MAC:"52:90:b4:89:35:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.318286 containerd[1472]: 2025-02-13 19:54:27.314 [INFO][4318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1" Namespace="calico-system" Pod="calico-kube-controllers-764d679f55-4zqq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:27.336388 containerd[1472]: time="2025-02-13T19:54:27.336176406Z" level=info msg="CreateContainer within sandbox \"16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3a20a330e24748f4772f4067bc2707003394531c9b9bba471972740e0b2f557\"" Feb 13 19:54:27.338250 containerd[1472]: time="2025-02-13T19:54:27.337905344Z" level=info msg="StartContainer for \"f3a20a330e24748f4772f4067bc2707003394531c9b9bba471972740e0b2f557\"" Feb 13 19:54:27.339853 containerd[1472]: time="2025-02-13T19:54:27.339803991Z" level=info msg="StartContainer for \"97a69b467eb042f088e92aa9a668bd2bbc3518b3e4f9ddbb28f09ae685ac8224\" returns successfully" Feb 13 19:54:27.343474 containerd[1472]: time="2025-02-13T19:54:27.343074795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:27.343474 containerd[1472]: time="2025-02-13T19:54:27.343139055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:27.343474 containerd[1472]: time="2025-02-13T19:54:27.343155396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.344319 containerd[1472]: time="2025-02-13T19:54:27.344221068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.364131 systemd[1]: Started cri-containerd-2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1.scope - libcontainer container 2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1. Feb 13 19:54:27.369726 systemd[1]: Started cri-containerd-f3a20a330e24748f4772f4067bc2707003394531c9b9bba471972740e0b2f557.scope - libcontainer container f3a20a330e24748f4772f4067bc2707003394531c9b9bba471972740e0b2f557. Feb 13 19:54:27.381516 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:54:27.393802 systemd-networkd[1403]: caliab9c28569a9: Link UP Feb 13 19:54:27.394101 systemd-networkd[1403]: caliab9c28569a9: Gained carrier Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.050 [INFO][4372] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0 coredns-668d6bf9bc- kube-system eef88bc7-df5c-4812-b65a-e088e32440c4 881 0 2025-02-13 19:53:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2m2jz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab9c28569a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.050 [INFO][4372] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.104 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" HandleID="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.139 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" HandleID="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a5170), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2m2jz", 
"timestamp":"2025-02-13 19:54:27.104722643 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.139 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.295 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.295 [INFO][4418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.345 [INFO][4418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.351 [INFO][4418] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.362 [INFO][4418] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.364 [INFO][4418] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.367 [INFO][4418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.367 [INFO][4418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.369 [INFO][4418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52 Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.372 [INFO][4418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.380 [INFO][4418] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.380 [INFO][4418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" host="localhost" Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.380 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:54:27.410307 containerd[1472]: 2025-02-13 19:54:27.380 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" HandleID="k8s-pod-network.80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:27.410941 containerd[1472]: 2025-02-13 19:54:27.388 [INFO][4372] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eef88bc7-df5c-4812-b65a-e088e32440c4", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2m2jz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab9c28569a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.410941 containerd[1472]: 2025-02-13 19:54:27.388 [INFO][4372] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:27.410941 containerd[1472]: 2025-02-13 19:54:27.388 [INFO][4372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab9c28569a9 ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:27.410941 containerd[1472]: 2025-02-13 19:54:27.396 [INFO][4372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:27.410941 containerd[1472]: 2025-02-13 19:54:27.397 
[INFO][4372] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eef88bc7-df5c-4812-b65a-e088e32440c4", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52", Pod:"coredns-668d6bf9bc-2m2jz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab9c28569a9", MAC:"02:cb:ad:f7:56:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.410941 containerd[1472]: 2025-02-13 19:54:27.406 [INFO][4372] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52" Namespace="kube-system" Pod="coredns-668d6bf9bc-2m2jz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:27.438303 containerd[1472]: time="2025-02-13T19:54:27.438232293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764d679f55-4zqq8,Uid:67b6a979-5b1c-436f-82c6-7e0dec8e8fa4,Namespace:calico-system,Attempt:1,} returns sandbox id \"2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1\"" Feb 13 19:54:27.440439 containerd[1472]: time="2025-02-13T19:54:27.440394214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:54:27.446094 containerd[1472]: time="2025-02-13T19:54:27.445559447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:27.446343 containerd[1472]: time="2025-02-13T19:54:27.446126302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:27.446343 containerd[1472]: time="2025-02-13T19:54:27.446153453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.447887 containerd[1472]: time="2025-02-13T19:54:27.447847254Z" level=info msg="StartContainer for \"f3a20a330e24748f4772f4067bc2707003394531c9b9bba471972740e0b2f557\" returns successfully" Feb 13 19:54:27.448170 containerd[1472]: time="2025-02-13T19:54:27.446858357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.465912 systemd[1]: Started cri-containerd-80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52.scope - libcontainer container 80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52. Feb 13 19:54:27.479521 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:54:27.505312 containerd[1472]: time="2025-02-13T19:54:27.505275761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2m2jz,Uid:eef88bc7-df5c-4812-b65a-e088e32440c4,Namespace:kube-system,Attempt:1,} returns sandbox id \"80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52\"" Feb 13 19:54:27.511637 kubelet[2525]: E0213 19:54:27.511161 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:27.513572 containerd[1472]: time="2025-02-13T19:54:27.513545234Z" level=info msg="CreateContainer within sandbox \"80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:54:27.534802 containerd[1472]: time="2025-02-13T19:54:27.533986031Z" level=info msg="CreateContainer within sandbox \"80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"097b032644f786c833646ef6ea8fc04df661a4530fd83f60b1cd93d9a451ec93\"" Feb 13 19:54:27.535496 containerd[1472]: time="2025-02-13T19:54:27.535457575Z" level=info msg="StartContainer for \"097b032644f786c833646ef6ea8fc04df661a4530fd83f60b1cd93d9a451ec93\"" Feb 13 19:54:27.582253 systemd[1]: Started cri-containerd-097b032644f786c833646ef6ea8fc04df661a4530fd83f60b1cd93d9a451ec93.scope - libcontainer container 097b032644f786c833646ef6ea8fc04df661a4530fd83f60b1cd93d9a451ec93. Feb 13 19:54:27.593961 systemd[1]: run-netns-cni\x2d3f42dd31\x2d0aa0\x2d48b9\x2d29c6\x2d58daab8445f9.mount: Deactivated successfully. Feb 13 19:54:27.619951 containerd[1472]: time="2025-02-13T19:54:27.619909864Z" level=info msg="StartContainer for \"097b032644f786c833646ef6ea8fc04df661a4530fd83f60b1cd93d9a451ec93\" returns successfully" Feb 13 19:54:27.638793 containerd[1472]: time="2025-02-13T19:54:27.636822644Z" level=info msg="StopPodSandbox for \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\"" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.685 [INFO][4774] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.685 [INFO][4774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" iface="eth0" netns="/var/run/netns/cni-d8bad6f4-b520-b047-2c7d-badd28954d33" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.686 [INFO][4774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" iface="eth0" netns="/var/run/netns/cni-d8bad6f4-b520-b047-2c7d-badd28954d33" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.686 [INFO][4774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" iface="eth0" netns="/var/run/netns/cni-d8bad6f4-b520-b047-2c7d-badd28954d33" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.686 [INFO][4774] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.686 [INFO][4774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.708 [INFO][4785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.708 [INFO][4785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.708 [INFO][4785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.712 [WARNING][4785] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.712 [INFO][4785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.714 [INFO][4785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:27.722933 containerd[1472]: 2025-02-13 19:54:27.717 [INFO][4774] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:27.722933 containerd[1472]: time="2025-02-13T19:54:27.722893293Z" level=info msg="TearDown network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\" successfully" Feb 13 19:54:27.722933 containerd[1472]: time="2025-02-13T19:54:27.722920674Z" level=info msg="StopPodSandbox for \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\" returns successfully" Feb 13 19:54:27.724150 containerd[1472]: time="2025-02-13T19:54:27.723655905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rtnwd,Uid:a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6,Namespace:calico-system,Attempt:1,}" Feb 13 19:54:27.723007 systemd[1]: run-netns-cni\x2dd8bad6f4\x2db520\x2db047\x2d2c7d\x2dbadd28954d33.mount: Deactivated successfully. Feb 13 19:54:27.769675 kubelet[2525]: E0213 19:54:27.769252 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:27.774403 kubelet[2525]: E0213 19:54:27.773672 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:27.796801 kubelet[2525]: I0213 19:54:27.795686 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7bv58" podStartSLOduration=37.79567036 podStartE2EDuration="37.79567036s" podCreationTimestamp="2025-02-13 19:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:54:27.783237717 +0000 UTC m=+44.230902420" watchObservedRunningTime="2025-02-13 19:54:27.79567036 +0000 UTC m=+44.243335064" Feb 13 19:54:27.796801 kubelet[2525]: I0213 19:54:27.795921 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2m2jz" podStartSLOduration=37.795915752 podStartE2EDuration="37.795915752s" podCreationTimestamp="2025-02-13 19:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:54:27.794237509 +0000 UTC m=+44.241902212" watchObservedRunningTime="2025-02-13 19:54:27.795915752 +0000 UTC m=+44.243580445" Feb 13 19:54:27.957171 systemd-networkd[1403]: cali3a5f05a33b3: Link UP Feb 13 19:54:27.957394 systemd-networkd[1403]: cali3a5f05a33b3: Gained carrier Feb 13 19:54:27.970049 kubelet[2525]: I0213 19:54:27.968866 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ffbd469f7-rkspr" podStartSLOduration=29.968849839 podStartE2EDuration="29.968849839s" podCreationTimestamp="2025-02-13 19:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:54:27.827460287 +0000 UTC m=+44.275124990" watchObservedRunningTime="2025-02-13 19:54:27.968849839 +0000 UTC m=+44.416514542" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.768 [INFO][4792] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rtnwd-eth0 csi-node-driver- calico-system a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6 930 0 2025-02-13 19:53:58 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rtnwd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3a5f05a33b3 [] []}} ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.769 [INFO][4792] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.819 [INFO][4805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" HandleID="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.929 [INFO][4805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" HandleID="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dce50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rtnwd", "timestamp":"2025-02-13 19:54:27.815861269 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.929 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.929 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.929 [INFO][4805] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.931 [INFO][4805] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.935 [INFO][4805] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.939 [INFO][4805] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.940 [INFO][4805] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.942 [INFO][4805] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.942 [INFO][4805] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.943 [INFO][4805] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.947 [INFO][4805] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.951 [INFO][4805] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.951 [INFO][4805] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" host="localhost" Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.951 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:54:27.972474 containerd[1472]: 2025-02-13 19:54:27.951 [INFO][4805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" HandleID="k8s-pod-network.8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.973637 containerd[1472]: 2025-02-13 19:54:27.954 [INFO][4792] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rtnwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rtnwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a5f05a33b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.973637 containerd[1472]: 2025-02-13 19:54:27.954 [INFO][4792] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.973637 containerd[1472]: 2025-02-13 19:54:27.955 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a5f05a33b3 ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.973637 containerd[1472]: 2025-02-13 19:54:27.957 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.973637 containerd[1472]: 2025-02-13 19:54:27.959 [INFO][4792] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rtnwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a", Pod:"csi-node-driver-rtnwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a5f05a33b3", MAC:"d6:bc:45:6d:61:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:27.973637 containerd[1472]: 2025-02-13 19:54:27.968 [INFO][4792] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a" Namespace="calico-system" Pod="csi-node-driver-rtnwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:27.993932 containerd[1472]: time="2025-02-13T19:54:27.993765832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:27.993932 containerd[1472]: time="2025-02-13T19:54:27.993883454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:27.993932 containerd[1472]: time="2025-02-13T19:54:27.993899143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:27.994079 containerd[1472]: time="2025-02-13T19:54:27.993974665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:28.012974 systemd[1]: Started cri-containerd-8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a.scope - libcontainer container 8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a. 
Feb 13 19:54:28.026927 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:54:28.039604 containerd[1472]: time="2025-02-13T19:54:28.039560107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rtnwd,Uid:a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6,Namespace:calico-system,Attempt:1,} returns sandbox id \"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a\"" Feb 13 19:54:28.361609 systemd-networkd[1403]: cali28c91d8afc4: Gained IPv6LL Feb 13 19:54:28.552992 systemd-networkd[1403]: cali1910a470cb7: Gained IPv6LL Feb 13 19:54:28.617004 systemd-networkd[1403]: cali753154bc71f: Gained IPv6LL Feb 13 19:54:28.744940 systemd-networkd[1403]: caliab9c28569a9: Gained IPv6LL Feb 13 19:54:28.794181 kubelet[2525]: E0213 19:54:28.794140 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:28.794317 kubelet[2525]: E0213 19:54:28.794291 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:29.064978 systemd-networkd[1403]: cali3a5f05a33b3: Gained IPv6LL Feb 13 19:54:29.796021 kubelet[2525]: E0213 19:54:29.795811 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:29.796021 kubelet[2525]: E0213 19:54:29.795915 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:29.885154 containerd[1472]: time="2025-02-13T19:54:29.885098454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:29.925973 containerd[1472]: time="2025-02-13T19:54:29.925907766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:54:29.978757 containerd[1472]: time="2025-02-13T19:54:29.978696848Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:30.030435 containerd[1472]: time="2025-02-13T19:54:30.030383134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:30.030938 containerd[1472]: time="2025-02-13T19:54:30.030888312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.590298681s" Feb 13 19:54:30.030938 containerd[1472]: time="2025-02-13T19:54:30.030927295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:54:30.031956 
containerd[1472]: time="2025-02-13T19:54:30.031926242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:54:30.039230 containerd[1472]: time="2025-02-13T19:54:30.039193568Z" level=info msg="CreateContainer within sandbox \"2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:54:30.096667 containerd[1472]: time="2025-02-13T19:54:30.096527746Z" level=info msg="CreateContainer within sandbox \"2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"53c53a3b81fb7ec92174fe38fd3605b0692dc535925df6d37ecdcefb71fdd044\"" Feb 13 19:54:30.097325 containerd[1472]: time="2025-02-13T19:54:30.097191894Z" level=info msg="StartContainer for \"53c53a3b81fb7ec92174fe38fd3605b0692dc535925df6d37ecdcefb71fdd044\"" Feb 13 19:54:30.124945 systemd[1]: Started cri-containerd-53c53a3b81fb7ec92174fe38fd3605b0692dc535925df6d37ecdcefb71fdd044.scope - libcontainer container 53c53a3b81fb7ec92174fe38fd3605b0692dc535925df6d37ecdcefb71fdd044. Feb 13 19:54:30.167518 containerd[1472]: time="2025-02-13T19:54:30.167465400Z" level=info msg="StartContainer for \"53c53a3b81fb7ec92174fe38fd3605b0692dc535925df6d37ecdcefb71fdd044\" returns successfully" Feb 13 19:54:30.815861 kubelet[2525]: I0213 19:54:30.815671 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-764d679f55-4zqq8" podStartSLOduration=30.223804792 podStartE2EDuration="32.815647743s" podCreationTimestamp="2025-02-13 19:53:58 +0000 UTC" firstStartedPulling="2025-02-13 19:54:27.439953977 +0000 UTC m=+43.887618680" lastFinishedPulling="2025-02-13 19:54:30.031796928 +0000 UTC m=+46.479461631" observedRunningTime="2025-02-13 19:54:30.81542732 +0000 UTC m=+47.263092023" watchObservedRunningTime="2025-02-13 19:54:30.815647743 +0000 UTC m=+47.263312447" Feb 13 19:54:32.152891 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:54012.service - OpenSSH per-connection server daemon (10.0.0.1:54012). Feb 13 19:54:32.194093 sshd[4954]: Accepted publickey for core from 10.0.0.1 port 54012 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:32.196191 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:32.200971 systemd-logind[1456]: New session 15 of user core. Feb 13 19:54:32.206986 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:54:32.246958 containerd[1472]: time="2025-02-13T19:54:32.246902664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:32.279922 containerd[1472]: time="2025-02-13T19:54:32.279311415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:54:32.348538 sshd[4954]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:32.350961 containerd[1472]: time="2025-02-13T19:54:32.350909777Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:32.352457 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:54012.service: Deactivated successfully. Feb 13 19:54:32.355132 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:54:32.358047 systemd-logind[1456]: Session 15 logged out. 
Waiting for processes to exit. Feb 13 19:54:32.359380 systemd-logind[1456]: Removed session 15. Feb 13 19:54:32.368855 containerd[1472]: time="2025-02-13T19:54:32.368758600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:32.369625 containerd[1472]: time="2025-02-13T19:54:32.369582327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.337622501s" Feb 13 19:54:32.369625 containerd[1472]: time="2025-02-13T19:54:32.369612683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:54:32.371887 containerd[1472]: time="2025-02-13T19:54:32.371853611Z" level=info msg="CreateContainer within sandbox \"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:54:33.175062 containerd[1472]: time="2025-02-13T19:54:33.175018774Z" level=info msg="CreateContainer within sandbox \"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6891b6f39293370d2033fce241c9cf6263afa0996905d2c4d8f9e3c61da5bac1\"" Feb 13 19:54:33.175574 containerd[1472]: time="2025-02-13T19:54:33.175535294Z" level=info msg="StartContainer for \"6891b6f39293370d2033fce241c9cf6263afa0996905d2c4d8f9e3c61da5bac1\"" Feb 13 19:54:33.204914 systemd[1]: Started cri-containerd-6891b6f39293370d2033fce241c9cf6263afa0996905d2c4d8f9e3c61da5bac1.scope - libcontainer container 6891b6f39293370d2033fce241c9cf6263afa0996905d2c4d8f9e3c61da5bac1. 
Feb 13 19:54:33.329282 containerd[1472]: time="2025-02-13T19:54:33.329226452Z" level=info msg="StartContainer for \"6891b6f39293370d2033fce241c9cf6263afa0996905d2c4d8f9e3c61da5bac1\" returns successfully" Feb 13 19:54:33.330494 containerd[1472]: time="2025-02-13T19:54:33.330443968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:54:34.944353 containerd[1472]: time="2025-02-13T19:54:34.944297120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:34.945082 containerd[1472]: time="2025-02-13T19:54:34.945027430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:54:34.946059 containerd[1472]: time="2025-02-13T19:54:34.946015205Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:34.948218 containerd[1472]: time="2025-02-13T19:54:34.948181221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:34.948735 containerd[1472]: time="2025-02-13T19:54:34.948697580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.618222163s" Feb 13 19:54:34.948759 containerd[1472]: time="2025-02-13T19:54:34.948737134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:54:34.950973 containerd[1472]: time="2025-02-13T19:54:34.950944078Z" level=info msg="CreateContainer within sandbox \"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:54:34.963471 containerd[1472]: time="2025-02-13T19:54:34.963429166Z" level=info msg="CreateContainer within sandbox \"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3bffcb4c7d06248e8b98a4ca7ffb4ae57b709354e7640c818df865e04b67278e\"" Feb 13 19:54:34.964088 containerd[1472]: time="2025-02-13T19:54:34.963992143Z" level=info msg="StartContainer for \"3bffcb4c7d06248e8b98a4ca7ffb4ae57b709354e7640c818df865e04b67278e\"" Feb 13 19:54:35.003449 systemd[1]: Started cri-containerd-3bffcb4c7d06248e8b98a4ca7ffb4ae57b709354e7640c818df865e04b67278e.scope - libcontainer container 3bffcb4c7d06248e8b98a4ca7ffb4ae57b709354e7640c818df865e04b67278e. 
Feb 13 19:54:35.035803 containerd[1472]: time="2025-02-13T19:54:35.035727810Z" level=info msg="StartContainer for \"3bffcb4c7d06248e8b98a4ca7ffb4ae57b709354e7640c818df865e04b67278e\" returns successfully" Feb 13 19:54:35.690462 kubelet[2525]: I0213 19:54:35.690411 2525 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:54:35.690462 kubelet[2525]: I0213 19:54:35.690446 2525 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:54:35.823444 kubelet[2525]: I0213 19:54:35.823332 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rtnwd" podStartSLOduration=30.916566733 podStartE2EDuration="37.823316892s" podCreationTimestamp="2025-02-13 19:53:58 +0000 UTC" firstStartedPulling="2025-02-13 19:54:28.042872508 +0000 UTC m=+44.490537211" lastFinishedPulling="2025-02-13 19:54:34.949622667 +0000 UTC m=+51.397287370" observedRunningTime="2025-02-13 19:54:35.822705805 +0000 UTC m=+52.270370528" watchObservedRunningTime="2025-02-13 19:54:35.823316892 +0000 UTC m=+52.270981595" Feb 13 19:54:37.362861 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:50752.service - OpenSSH per-connection server daemon (10.0.0.1:50752). Feb 13 19:54:37.406221 sshd[5056]: Accepted publickey for core from 10.0.0.1 port 50752 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:37.407969 sshd[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:37.411822 systemd-logind[1456]: New session 16 of user core. Feb 13 19:54:37.418898 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:54:37.539862 sshd[5056]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:37.544406 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:50752.service: Deactivated successfully. Feb 13 19:54:37.547290 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:54:37.548015 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:54:37.548982 systemd-logind[1456]: Removed session 16. Feb 13 19:54:42.552640 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:50764.service - OpenSSH per-connection server daemon (10.0.0.1:50764). Feb 13 19:54:42.593283 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 50764 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:42.595101 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:42.599265 systemd-logind[1456]: New session 17 of user core. Feb 13 19:54:42.607934 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:54:42.716301 sshd[5079]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:42.721421 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:50764.service: Deactivated successfully. Feb 13 19:54:42.723525 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:54:42.724093 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:54:42.725259 systemd-logind[1456]: Removed session 17. 
Feb 13 19:54:43.625519 containerd[1472]: time="2025-02-13T19:54:43.625481158Z" level=info msg="StopPodSandbox for \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\"" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.714 [WARNING][5108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7bv58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ce6cf94a-45eb-47eb-acba-4bf09b224c4f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd", Pod:"coredns-668d6bf9bc-7bv58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28c91d8afc4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.714 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.714 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" iface="eth0" netns="" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.714 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.714 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.733 [INFO][5117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.733 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.733 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.737 [WARNING][5117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.737 [INFO][5117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.738 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:43.743145 containerd[1472]: 2025-02-13 19:54:43.740 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.743553 containerd[1472]: time="2025-02-13T19:54:43.743176678Z" level=info msg="TearDown network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\" successfully" Feb 13 19:54:43.743553 containerd[1472]: time="2025-02-13T19:54:43.743200092Z" level=info msg="StopPodSandbox for \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\" returns successfully" Feb 13 19:54:43.750349 containerd[1472]: time="2025-02-13T19:54:43.750304683Z" level=info msg="RemovePodSandbox for \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\"" Feb 13 19:54:43.752480 containerd[1472]: time="2025-02-13T19:54:43.752451641Z" level=info msg="Forcibly stopping sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\"" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.783 [WARNING][5139] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7bv58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ce6cf94a-45eb-47eb-acba-4bf09b224c4f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d254bddea79f3c7d6eff43ff9f4d8568625bcfb0e6712f10b129ba939e8c54fd", Pod:"coredns-668d6bf9bc-7bv58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28c91d8afc4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.783 [INFO][5139] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.783 [INFO][5139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" iface="eth0" netns="" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.783 [INFO][5139] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.783 [INFO][5139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.802 [INFO][5146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.802 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.802 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.806 [WARNING][5146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.806 [INFO][5146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" HandleID="k8s-pod-network.534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Workload="localhost-k8s-coredns--668d6bf9bc--7bv58-eth0" Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.808 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:43.812614 containerd[1472]: 2025-02-13 19:54:43.810 [INFO][5139] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768" Feb 13 19:54:43.813109 containerd[1472]: time="2025-02-13T19:54:43.812651705Z" level=info msg="TearDown network for sandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\" successfully" Feb 13 19:54:44.656240 containerd[1472]: time="2025-02-13T19:54:44.656179584Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:54:44.656689 containerd[1472]: time="2025-02-13T19:54:44.656264964Z" level=info msg="RemovePodSandbox \"534d3068ba5a6ccaff248a18a7739d4a6a140c4e30e989f7dbe0fd1e5203b768\" returns successfully" Feb 13 19:54:44.656770 containerd[1472]: time="2025-02-13T19:54:44.656746077Z" level=info msg="StopPodSandbox for \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\"" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.689 [WARNING][5169] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eef88bc7-df5c-4812-b65a-e088e32440c4", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52", Pod:"coredns-668d6bf9bc-2m2jz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab9c28569a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.690 [INFO][5169] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.690 [INFO][5169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" iface="eth0" netns="" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.690 [INFO][5169] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.690 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.710 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.710 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.710 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.715 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.715 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.716 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:44.720766 containerd[1472]: 2025-02-13 19:54:44.718 [INFO][5169] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.721526 containerd[1472]: time="2025-02-13T19:54:44.720817811Z" level=info msg="TearDown network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\" successfully" Feb 13 19:54:44.721526 containerd[1472]: time="2025-02-13T19:54:44.720848378Z" level=info msg="StopPodSandbox for \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\" returns successfully" Feb 13 19:54:44.721526 containerd[1472]: time="2025-02-13T19:54:44.721313901Z" level=info msg="RemovePodSandbox for \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\"" Feb 13 19:54:44.721526 containerd[1472]: time="2025-02-13T19:54:44.721348646Z" level=info msg="Forcibly stopping sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\"" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.753 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eef88bc7-df5c-4812-b65a-e088e32440c4", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80fd02a24615c10025627bcc770b37bfd12978482e3db060b99d73e8cfb2ef52", Pod:"coredns-668d6bf9bc-2m2jz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab9c28569a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.753 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.753 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" iface="eth0" netns="" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.753 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.753 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.771 [INFO][5206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.771 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.771 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.776 [WARNING][5206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.776 [INFO][5206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" HandleID="k8s-pod-network.bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Workload="localhost-k8s-coredns--668d6bf9bc--2m2jz-eth0" Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.777 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:44.781775 containerd[1472]: 2025-02-13 19:54:44.779 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3" Feb 13 19:54:44.782200 containerd[1472]: time="2025-02-13T19:54:44.781836736Z" level=info msg="TearDown network for sandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\" successfully" Feb 13 19:54:44.960738 containerd[1472]: time="2025-02-13T19:54:44.960618200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:54:44.960738 containerd[1472]: time="2025-02-13T19:54:44.960693190Z" level=info msg="RemovePodSandbox \"bb1397ef6c013609eabffe5356b004e048457bda3aef34ef482fb75b66b8c5e3\" returns successfully" Feb 13 19:54:44.961343 containerd[1472]: time="2025-02-13T19:54:44.961073554Z" level=info msg="StopPodSandbox for \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\"" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:44.992 [WARNING][5229] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rtnwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a", Pod:"csi-node-driver-rtnwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a5f05a33b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:44.992 [INFO][5229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:44.992 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" iface="eth0" netns="" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:44.992 [INFO][5229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:44.992 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:45.010 [INFO][5236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:45.010 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:45.010 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:45.014 [WARNING][5236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:45.014 [INFO][5236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:45.015 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:45.019970 containerd[1472]: 2025-02-13 19:54:45.017 [INFO][5229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.020383 containerd[1472]: time="2025-02-13T19:54:45.020012034Z" level=info msg="TearDown network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\" successfully" Feb 13 19:54:45.020383 containerd[1472]: time="2025-02-13T19:54:45.020036079Z" level=info msg="StopPodSandbox for \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\" returns successfully" Feb 13 19:54:45.020546 containerd[1472]: time="2025-02-13T19:54:45.020523204Z" level=info msg="RemovePodSandbox for \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\"" Feb 13 19:54:45.020579 containerd[1472]: time="2025-02-13T19:54:45.020550054Z" level=info msg="Forcibly stopping sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\"" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.055 [WARNING][5259] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rtnwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a35aff9a-f3a6-44d2-8ee2-7a8e5db0f8d6", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d7d2150b507597000471dbd420a1ab2f8429c63d05e4a2b9531ce17c3e9550a", Pod:"csi-node-driver-rtnwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a5f05a33b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.055 [INFO][5259] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.055 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" iface="eth0" netns="" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.055 [INFO][5259] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.055 [INFO][5259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.072 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.072 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.072 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.077 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.077 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" HandleID="k8s-pod-network.9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Workload="localhost-k8s-csi--node--driver--rtnwd-eth0" Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.078 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:45.082726 containerd[1472]: 2025-02-13 19:54:45.080 [INFO][5259] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95" Feb 13 19:54:45.083145 containerd[1472]: time="2025-02-13T19:54:45.082763937Z" level=info msg="TearDown network for sandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\" successfully" Feb 13 19:54:45.194219 containerd[1472]: time="2025-02-13T19:54:45.194163569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:54:45.194219 containerd[1472]: time="2025-02-13T19:54:45.194226968Z" level=info msg="RemovePodSandbox \"9d8150b6a8ff559c377def30d080039a0754257a57c146f4f374853ee1072a95\" returns successfully" Feb 13 19:54:45.194623 containerd[1472]: time="2025-02-13T19:54:45.194591602Z" level=info msg="StopPodSandbox for \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\"" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.224 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"26630c90-52b3-480e-9c9a-098510701036", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4", Pod:"calico-apiserver-6ffbd469f7-5n427", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6a3a7c32a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.224 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.224 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" iface="eth0" netns="" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.225 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.225 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.242 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.242 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.242 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.247 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.247 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.248 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:45.252948 containerd[1472]: 2025-02-13 19:54:45.250 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.252948 containerd[1472]: time="2025-02-13T19:54:45.252920987Z" level=info msg="TearDown network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\" successfully" Feb 13 19:54:45.252948 containerd[1472]: time="2025-02-13T19:54:45.252943219Z" level=info msg="StopPodSandbox for \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\" returns successfully" Feb 13 19:54:45.253415 containerd[1472]: time="2025-02-13T19:54:45.253328481Z" level=info msg="RemovePodSandbox for \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\"" Feb 13 19:54:45.253415 containerd[1472]: time="2025-02-13T19:54:45.253349541Z" level=info msg="Forcibly stopping sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\"" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.495 [WARNING][5318] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"26630c90-52b3-480e-9c9a-098510701036", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a525e518fe9d339b798e6e2600ab4e25849d4510a7adf6a49b2e479253f8bbb4", Pod:"calico-apiserver-6ffbd469f7-5n427", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6a3a7c32a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.495 [INFO][5318] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.495 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" iface="eth0" netns="" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.495 [INFO][5318] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.495 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.515 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.515 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.515 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.520 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.520 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" HandleID="k8s-pod-network.2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--5n427-eth0" Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.521 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:45.527818 containerd[1472]: 2025-02-13 19:54:45.523 [INFO][5318] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5" Feb 13 19:54:45.527818 containerd[1472]: time="2025-02-13T19:54:45.527784710Z" level=info msg="TearDown network for sandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\" successfully" Feb 13 19:54:45.745257 containerd[1472]: time="2025-02-13T19:54:45.745189332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:54:45.745634 containerd[1472]: time="2025-02-13T19:54:45.745268992Z" level=info msg="RemovePodSandbox \"2559d86bc329bf44e7227295ef03a56b323b3650b9cfe405c7af129ce1ed33f5\" returns successfully" Feb 13 19:54:45.745769 containerd[1472]: time="2025-02-13T19:54:45.745722051Z" level=info msg="StopPodSandbox for \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\"" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.814 [WARNING][5348] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df95c66-53bc-436d-8654-d036e666d8e1", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070", Pod:"calico-apiserver-6ffbd469f7-rkspr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1910a470cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.815 [INFO][5348] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.815 [INFO][5348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" iface="eth0" netns="" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.815 [INFO][5348] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.815 [INFO][5348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.839 [INFO][5355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.839 [INFO][5355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.839 [INFO][5355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.844 [WARNING][5355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.844 [INFO][5355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.845 [INFO][5355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:45.850216 containerd[1472]: 2025-02-13 19:54:45.848 [INFO][5348] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:45.850620 containerd[1472]: time="2025-02-13T19:54:45.850239479Z" level=info msg="TearDown network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\" successfully" Feb 13 19:54:45.850620 containerd[1472]: time="2025-02-13T19:54:45.850263274Z" level=info msg="StopPodSandbox for \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\" returns successfully" Feb 13 19:54:45.850620 containerd[1472]: time="2025-02-13T19:54:45.850536477Z" level=info msg="RemovePodSandbox for \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\"" Feb 13 19:54:45.850620 containerd[1472]: time="2025-02-13T19:54:45.850566353Z" level=info msg="Forcibly stopping sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\"" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.021 [WARNING][5380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0", GenerateName:"calico-apiserver-6ffbd469f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df95c66-53bc-436d-8654-d036e666d8e1", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffbd469f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16eead3ffc4c752dfe27e9c1f1fe075ebdc6dc83a52f714ca0643e54f7e4d070", Pod:"calico-apiserver-6ffbd469f7-rkspr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1910a470cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.021 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.021 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" iface="eth0" netns="" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.021 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.021 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.041 [INFO][5387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.041 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.041 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.046 [WARNING][5387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.046 [INFO][5387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" HandleID="k8s-pod-network.d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Workload="localhost-k8s-calico--apiserver--6ffbd469f7--rkspr-eth0" Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.047 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:46.051807 containerd[1472]: 2025-02-13 19:54:46.049 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac" Feb 13 19:54:46.052274 containerd[1472]: time="2025-02-13T19:54:46.051855804Z" level=info msg="TearDown network for sandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\" successfully" Feb 13 19:54:46.073421 containerd[1472]: time="2025-02-13T19:54:46.073376232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:54:46.073500 containerd[1472]: time="2025-02-13T19:54:46.073441515Z" level=info msg="RemovePodSandbox \"d0a75b9e322da9de6f3afe17208527a08a6a0833a2791eb6420366b68a6867ac\" returns successfully" Feb 13 19:54:46.074917 containerd[1472]: time="2025-02-13T19:54:46.073942965Z" level=info msg="StopPodSandbox for \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\"" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.119 [WARNING][5410] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0", GenerateName:"calico-kube-controllers-764d679f55-", Namespace:"calico-system", SelfLink:"", UID:"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764d679f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1", Pod:"calico-kube-controllers-764d679f55-4zqq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali753154bc71f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.119 [INFO][5410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.119 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" iface="eth0" netns="" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.119 [INFO][5410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.119 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.142 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.142 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.142 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.148 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.148 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.149 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:46.154029 containerd[1472]: 2025-02-13 19:54:46.151 [INFO][5410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.154029 containerd[1472]: time="2025-02-13T19:54:46.153981599Z" level=info msg="TearDown network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\" successfully" Feb 13 19:54:46.154029 containerd[1472]: time="2025-02-13T19:54:46.154010643Z" level=info msg="StopPodSandbox for \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\" returns successfully" Feb 13 19:54:46.154742 containerd[1472]: time="2025-02-13T19:54:46.154453604Z" level=info msg="RemovePodSandbox for \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\"" Feb 13 19:54:46.154742 containerd[1472]: time="2025-02-13T19:54:46.154477430Z" level=info msg="Forcibly stopping sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\"" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.187 [WARNING][5440] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0", GenerateName:"calico-kube-controllers-764d679f55-", Namespace:"calico-system", SelfLink:"", UID:"67b6a979-5b1c-436f-82c6-7e0dec8e8fa4", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764d679f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fa8462fd9893c6e498aa61bd0804a2240cc240e67e3bf98854ee52d9efed1c1", Pod:"calico-kube-controllers-764d679f55-4zqq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali753154bc71f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.187 [INFO][5440] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.187 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" iface="eth0" netns="" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.187 [INFO][5440] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.187 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.205 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.205 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.205 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.210 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.210 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" HandleID="k8s-pod-network.c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Workload="localhost-k8s-calico--kube--controllers--764d679f55--4zqq8-eth0" Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.211 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:46.215402 containerd[1472]: 2025-02-13 19:54:46.213 [INFO][5440] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3" Feb 13 19:54:46.215868 containerd[1472]: time="2025-02-13T19:54:46.215414453Z" level=info msg="TearDown network for sandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\" successfully" Feb 13 19:54:46.351548 containerd[1472]: time="2025-02-13T19:54:46.351497961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:54:46.351548 containerd[1472]: time="2025-02-13T19:54:46.351553916Z" level=info msg="RemovePodSandbox \"c745ca6b389ae1f7a84a030fe1bc0daff09ec3f56ac463c9e2e5fecef3c82cc3\" returns successfully" Feb 13 19:54:47.730623 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:60042.service - OpenSSH per-connection server daemon (10.0.0.1:60042). Feb 13 19:54:47.773593 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 60042 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:54:47.775258 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:47.779287 systemd-logind[1456]: New session 18 of user core. Feb 13 19:54:47.786917 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:54:47.906688 sshd[5455]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:47.910424 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:60042.service: Deactivated successfully. Feb 13 19:54:47.912645 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:54:47.913332 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:54:47.914328 systemd-logind[1456]: Removed session 18. Feb 13 19:54:50.760017 systemd[1]: run-containerd-runc-k8s.io-178c3ce04dbe109e03e201ad335e5b4cab17dd673a8470b9f5267fa3b6bf402e-runc.sIuPjn.mount: Deactivated successfully. Feb 13 19:54:50.803638 kubelet[2525]: E0213 19:54:50.803587 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:52.921792 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:60044.service - OpenSSH per-connection server daemon (10.0.0.1:60044). 
Feb 13 19:54:52.959075 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:54:52.960569 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:54:52.964325 systemd-logind[1456]: New session 19 of user core.
Feb 13 19:54:52.974958 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:54:53.085964 sshd[5497]: pam_unix(sshd:session): session closed for user core
Feb 13 19:54:53.098530 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:60044.service: Deactivated successfully.
Feb 13 19:54:53.100238 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:54:53.101971 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:54:53.108045 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:60060.service - OpenSSH per-connection server daemon (10.0.0.1:60060).
Feb 13 19:54:53.109102 systemd-logind[1456]: Removed session 19.
Feb 13 19:54:53.141687 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 60060 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:54:53.143171 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:54:53.147222 systemd-logind[1456]: New session 20 of user core.
Feb 13 19:54:53.152050 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:54:53.353653 sshd[5512]: pam_unix(sshd:session): session closed for user core
Feb 13 19:54:53.360971 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:60060.service: Deactivated successfully.
Feb 13 19:54:53.363257 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:54:53.365033 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:54:53.373633 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:60072.service - OpenSSH per-connection server daemon (10.0.0.1:60072).
Feb 13 19:54:53.375154 systemd-logind[1456]: Removed session 20.
Feb 13 19:54:53.410119 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 60072 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:54:53.411729 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:54:53.412842 kubelet[2525]: I0213 19:54:53.412686 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:54:53.419692 systemd-logind[1456]: New session 21 of user core.
Feb 13 19:54:53.423078 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:54:54.306156 sshd[5526]: pam_unix(sshd:session): session closed for user core
Feb 13 19:54:54.317131 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:60072.service: Deactivated successfully.
Feb 13 19:54:54.319515 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:54:54.320310 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:54:54.327543 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:60084.service - OpenSSH per-connection server daemon (10.0.0.1:60084).
Feb 13 19:54:54.328557 systemd-logind[1456]: Removed session 21.
Feb 13 19:54:54.375645 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 60084 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:54:54.377315 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:54:54.381306 systemd-logind[1456]: New session 22 of user core.
Feb 13 19:54:54.375645 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 60084 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:54:54.377315 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:54:54.381306 systemd-logind[1456]: New session 22 of user core.
Feb 13 19:54:54.399929 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:54:54.612661 sshd[5551]: pam_unix(sshd:session): session closed for user core
Feb 13 19:54:54.621985 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:60084.service: Deactivated successfully.
Feb 13 19:54:54.623950 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:54:54.625597 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:54:54.633519 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:60094.service - OpenSSH per-connection server daemon (10.0.0.1:60094).
Feb 13 19:54:54.634406 systemd-logind[1456]: Removed session 22.
Feb 13 19:54:54.668957 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 60094 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:54:54.670586 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:54:54.674482 systemd-logind[1456]: New session 23 of user core.
Feb 13 19:54:54.686900 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:54:54.803614 sshd[5563]: pam_unix(sshd:session): session closed for user core
Feb 13 19:54:54.808239 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:60094.service: Deactivated successfully.
Feb 13 19:54:54.810233 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:54:54.810990 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:54:54.812009 systemd-logind[1456]: Removed session 23.
Feb 13 19:54:56.636058 kubelet[2525]: E0213 19:54:56.636020 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:54:59.820265 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:41618.service - OpenSSH per-connection server daemon (10.0.0.1:41618).
Feb 13 19:54:59.857850 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 41618 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:54:59.859432 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:54:59.863639 systemd-logind[1456]: New session 24 of user core.
Feb 13 19:54:59.871905 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:54:59.982555 sshd[5584]: pam_unix(sshd:session): session closed for user core
Feb 13 19:54:59.986752 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:41618.service: Deactivated successfully.
Feb 13 19:54:59.988844 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:54:59.989516 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:54:59.990337 systemd-logind[1456]: Removed session 24.
Feb 13 19:55:03.636331 kubelet[2525]: E0213 19:55:03.636286 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
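
Note: the pam_unix(sshd:session) open/close pairs are emitted by the pam_unix session module in sshd's PAM stack; a single conventional line is enough to produce them. A sketch, assuming a typical /etc/pam.d/sshd (the actual stack on this host is not shown in the log):

    # /etc/pam.d/sshd (fragment, hypothetical)
    session    required    pam_unix.so
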
Feb 13 19:55:04.993854 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:41622.service - OpenSSH per-connection server daemon (10.0.0.1:41622).
Feb 13 19:55:05.032406 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 41622 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:55:05.033888 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:55:05.037473 systemd-logind[1456]: New session 25 of user core.
Feb 13 19:55:05.044889 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:55:05.149206 sshd[5624]: pam_unix(sshd:session): session closed for user core
Feb 13 19:55:05.153527 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:41622.service: Deactivated successfully.
Feb 13 19:55:05.155687 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:55:05.156371 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:55:05.157237 systemd-logind[1456]: Removed session 25.
Feb 13 19:55:08.635765 kubelet[2525]: E0213 19:55:08.635725 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:55:10.164720 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:43084.service - OpenSSH per-connection server daemon (10.0.0.1:43084).
Feb 13 19:55:10.203023 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 43084 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:55:10.204467 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:55:10.208030 systemd-logind[1456]: New session 26 of user core.
Feb 13 19:55:10.225900 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:55:10.325985 sshd[5640]: pam_unix(sshd:session): session closed for user core
Feb 13 19:55:10.329435 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:43084.service: Deactivated successfully.
Feb 13 19:55:10.331255 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:55:10.331861 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:55:10.332730 systemd-logind[1456]: Removed session 26.
Feb 13 19:55:14.635390 kubelet[2525]: E0213 19:55:14.635341 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:55:15.337749 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:43094.service - OpenSSH per-connection server daemon (10.0.0.1:43094).
Feb 13 19:55:15.375200 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 43094 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:55:15.376937 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:55:15.380974 systemd-logind[1456]: New session 27 of user core.
Feb 13 19:55:15.388915 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:55:15.494212 sshd[5654]: pam_unix(sshd:session): session closed for user core
Feb 13 19:55:15.498112 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:43094.service: Deactivated successfully.
Feb 13 19:55:15.500365 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:55:15.501030 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:55:15.502002 systemd-logind[1456]: Removed session 27.
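
Note: sessions of this kind can be followed across all per-connection units with a unit glob, for example:

    # follow every per-connection sshd instance (glob support for -u requires a reasonably recent systemd)
    journalctl -u 'sshd@*' -f

Individual logins can also be correlated with their session-N.scope units reported by systemd-logind above.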