May 13 10:01:36.844703 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 08:42:12 -00 2025
May 13 10:01:36.844733 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=149a30fd2ffdbc3f620e76792215da346cc1a8b964894e8a61f45888248ff7ba
May 13 10:01:36.844743 kernel: BIOS-provided physical RAM map:
May 13 10:01:36.844750 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 10:01:36.844756 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 10:01:36.844763 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 10:01:36.844770 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 10:01:36.844780 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 10:01:36.844789 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 10:01:36.844796 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 10:01:36.844802 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 10:01:36.844809 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 10:01:36.844815 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 10:01:36.844822 kernel: NX (Execute Disable) protection: active
May 13 10:01:36.844834 kernel: APIC: Static calls initialized
May 13 10:01:36.844842 kernel: SMBIOS 2.8 present.
May 13 10:01:36.844854 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 10:01:36.844861 kernel: DMI: Memory slots populated: 1/1
May 13 10:01:36.844888 kernel: Hypervisor detected: KVM
May 13 10:01:36.844896 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 10:01:36.844903 kernel: kvm-clock: using sched offset of 4264509100 cycles
May 13 10:01:36.844911 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 10:01:36.844918 kernel: tsc: Detected 2794.748 MHz processor
May 13 10:01:36.844929 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 10:01:36.844937 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 10:01:36.844944 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 10:01:36.844951 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 10:01:36.844959 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 10:01:36.844966 kernel: Using GB pages for direct mapping
May 13 10:01:36.844973 kernel: ACPI: Early table checksum verification disabled
May 13 10:01:36.844981 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 10:01:36.844988 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:36.844998 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:36.845005 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:36.845012 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 10:01:36.845019 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:36.845027 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:36.845034 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:36.845041 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:36.845048 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 10:01:36.845061 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 10:01:36.845068 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 10:01:36.845076 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 10:01:36.845083 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 10:01:36.845090 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 10:01:36.845107 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 10:01:36.845120 kernel: No NUMA configuration found
May 13 10:01:36.845128 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 10:01:36.845135 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 13 10:01:36.845143 kernel: Zone ranges:
May 13 10:01:36.845150 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 10:01:36.845158 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 10:01:36.845165 kernel: Normal empty
May 13 10:01:36.845172 kernel: Device empty
May 13 10:01:36.845180 kernel: Movable zone start for each node
May 13 10:01:36.845187 kernel: Early memory node ranges
May 13 10:01:36.845197 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 10:01:36.845204 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 10:01:36.845212 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 10:01:36.845219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 10:01:36.845227 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 10:01:36.845234 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 10:01:36.845242 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 10:01:36.845252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 10:01:36.845260 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 10:01:36.845270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 10:01:36.845278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 10:01:36.845287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 10:01:36.845295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 10:01:36.845302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 10:01:36.845309 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 10:01:36.845317 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 10:01:36.845324 kernel: TSC deadline timer available
May 13 10:01:36.845332 kernel: CPU topo: Max. logical packages: 1
May 13 10:01:36.845342 kernel: CPU topo: Max. logical dies: 1
May 13 10:01:36.845349 kernel: CPU topo: Max. dies per package: 1
May 13 10:01:36.845357 kernel: CPU topo: Max. threads per core: 1
May 13 10:01:36.845364 kernel: CPU topo: Num. cores per package: 4
May 13 10:01:36.845371 kernel: CPU topo: Num. threads per package: 4
May 13 10:01:36.845379 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 13 10:01:36.845386 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 10:01:36.845393 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 10:01:36.845401 kernel: kvm-guest: setup PV sched yield
May 13 10:01:36.845408 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 10:01:36.845418 kernel: Booting paravirtualized kernel on KVM
May 13 10:01:36.845426 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 10:01:36.845433 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 10:01:36.845441 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 13 10:01:36.845448 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 13 10:01:36.845455 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 10:01:36.845463 kernel: kvm-guest: PV spinlocks enabled
May 13 10:01:36.845470 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 10:01:36.845479 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=149a30fd2ffdbc3f620e76792215da346cc1a8b964894e8a61f45888248ff7ba
May 13 10:01:36.845489 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 10:01:36.845497 kernel: random: crng init done
May 13 10:01:36.845504 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 10:01:36.845512 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 10:01:36.845519 kernel: Fallback order for Node 0: 0
May 13 10:01:36.845526 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 13 10:01:36.845534 kernel: Policy zone: DMA32
May 13 10:01:36.845541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 10:01:36.845551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 10:01:36.845558 kernel: ftrace: allocating 40071 entries in 157 pages
May 13 10:01:36.845566 kernel: ftrace: allocated 157 pages with 5 groups
May 13 10:01:36.845573 kernel: Dynamic Preempt: voluntary
May 13 10:01:36.845580 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 10:01:36.845593 kernel: rcu: RCU event tracing is enabled.
May 13 10:01:36.845601 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 10:01:36.845608 kernel: Trampoline variant of Tasks RCU enabled.
May 13 10:01:36.845618 kernel: Rude variant of Tasks RCU enabled.
May 13 10:01:36.845629 kernel: Tracing variant of Tasks RCU enabled.
May 13 10:01:36.845636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 10:01:36.845644 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 10:01:36.845651 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:01:36.845659 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:01:36.845666 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:01:36.845674 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 10:01:36.845681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 10:01:36.845701 kernel: Console: colour VGA+ 80x25
May 13 10:01:36.845716 kernel: printk: legacy console [ttyS0] enabled
May 13 10:01:36.845723 kernel: ACPI: Core revision 20240827
May 13 10:01:36.845732 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 10:01:36.845742 kernel: APIC: Switch to symmetric I/O mode setup
May 13 10:01:36.845750 kernel: x2apic enabled
May 13 10:01:36.845760 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 10:01:36.845768 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 10:01:36.845776 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 10:01:36.845786 kernel: kvm-guest: setup PV IPIs
May 13 10:01:36.845794 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 10:01:36.845802 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 10:01:36.845810 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 10:01:36.845818 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 10:01:36.845828 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 10:01:36.845836 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 10:01:36.845846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 10:01:36.845856 kernel: Spectre V2 : Mitigation: Retpolines
May 13 10:01:36.845864 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 10:01:36.845886 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 10:01:36.845896 kernel: RETBleed: Mitigation: untrained return thunk
May 13 10:01:36.845904 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 10:01:36.845912 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 10:01:36.845919 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 10:01:36.845928 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 10:01:36.845936 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 10:01:36.845947 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 10:01:36.845955 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 10:01:36.845963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 10:01:36.845970 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 10:01:36.845978 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 10:01:36.845986 kernel: Freeing SMP alternatives memory: 32K
May 13 10:01:36.845994 kernel: pid_max: default: 32768 minimum: 301
May 13 10:01:36.846001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 10:01:36.846012 kernel: landlock: Up and running.
May 13 10:01:36.846019 kernel: SELinux: Initializing.
May 13 10:01:36.846027 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:01:36.846035 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:01:36.846045 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 10:01:36.846053 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 10:01:36.846061 kernel: ... version: 0
May 13 10:01:36.846069 kernel: ... bit width: 48
May 13 10:01:36.846076 kernel: ... generic registers: 6
May 13 10:01:36.846087 kernel: ... value mask: 0000ffffffffffff
May 13 10:01:36.846095 kernel: ... max period: 00007fffffffffff
May 13 10:01:36.846102 kernel: ... fixed-purpose events: 0
May 13 10:01:36.846110 kernel: ... event mask: 000000000000003f
May 13 10:01:36.846118 kernel: signal: max sigframe size: 1776
May 13 10:01:36.846125 kernel: rcu: Hierarchical SRCU implementation.
May 13 10:01:36.846133 kernel: rcu: Max phase no-delay instances is 400.
May 13 10:01:36.846141 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 10:01:36.846149 kernel: smp: Bringing up secondary CPUs ...
May 13 10:01:36.846157 kernel: smpboot: x86: Booting SMP configuration:
May 13 10:01:36.846167 kernel: .... node #0, CPUs: #1 #2 #3
May 13 10:01:36.846175 kernel: smp: Brought up 1 node, 4 CPUs
May 13 10:01:36.846183 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 10:01:36.846191 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9948K rodata, 54420K init, 2548K bss, 136904K reserved, 0K cma-reserved)
May 13 10:01:36.846199 kernel: devtmpfs: initialized
May 13 10:01:36.846206 kernel: x86/mm: Memory block size: 128MB
May 13 10:01:36.846214 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 10:01:36.846222 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 10:01:36.846240 kernel: pinctrl core: initialized pinctrl subsystem
May 13 10:01:36.846261 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 10:01:36.846271 kernel: audit: initializing netlink subsys (disabled)
May 13 10:01:36.846279 kernel: audit: type=2000 audit(1747130493.495:1): state=initialized audit_enabled=0 res=1
May 13 10:01:36.846287 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 10:01:36.846295 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 10:01:36.846312 kernel: cpuidle: using governor menu
May 13 10:01:36.846320 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 10:01:36.846328 kernel: dca service started, version 1.12.1
May 13 10:01:36.846336 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 13 10:01:36.846347 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 13 10:01:36.846355 kernel: PCI: Using configuration type 1 for base access
May 13 10:01:36.846363 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 10:01:36.846371 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 10:01:36.846378 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 10:01:36.846386 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 10:01:36.846394 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 10:01:36.846402 kernel: ACPI: Added _OSI(Module Device)
May 13 10:01:36.846412 kernel: ACPI: Added _OSI(Processor Device)
May 13 10:01:36.846419 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 10:01:36.846427 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 10:01:36.846444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 10:01:36.846453 kernel: ACPI: Interpreter enabled
May 13 10:01:36.846463 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 10:01:36.846471 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 10:01:36.846479 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 10:01:36.846487 kernel: PCI: Using E820 reservations for host bridge windows
May 13 10:01:36.846494 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 10:01:36.846505 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 10:01:36.846729 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 10:01:36.846856 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 10:01:36.847007 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 10:01:36.847019 kernel: PCI host bridge to bus 0000:00
May 13 10:01:36.847152 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 10:01:36.847273 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 10:01:36.847396 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 10:01:36.847508 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 10:01:36.847618 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 10:01:36.847735 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 10:01:36.847846 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 10:01:36.848021 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 13 10:01:36.848166 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 13 10:01:36.848290 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 13 10:01:36.848412 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 13 10:01:36.848565 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 13 10:01:36.848687 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 10:01:36.848931 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 10:01:36.849070 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 13 10:01:36.849190 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 13 10:01:36.849316 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 10:01:36.849478 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 13 10:01:36.849604 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 13 10:01:36.849735 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 13 10:01:36.849857 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 10:01:36.850020 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 13 10:01:36.850150 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 13 10:01:36.850271 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 13 10:01:36.850439 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 10:01:36.850564 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 13 10:01:36.850702 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 13 10:01:36.850835 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 10:01:36.851013 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 13 10:01:36.851136 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 13 10:01:36.851254 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 13 10:01:36.851394 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 13 10:01:36.851546 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 13 10:01:36.851558 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 10:01:36.851566 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 10:01:36.851584 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 10:01:36.851592 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 10:01:36.851600 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 10:01:36.851608 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 10:01:36.851615 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 10:01:36.851623 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 10:01:36.851631 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 10:01:36.851639 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 10:01:36.851647 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 10:01:36.851657 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 10:01:36.851665 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 10:01:36.851673 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 10:01:36.851681 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 10:01:36.851688 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 10:01:36.851696 kernel: iommu: Default domain type: Translated
May 13 10:01:36.851704 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 10:01:36.851720 kernel: PCI: Using ACPI for IRQ routing
May 13 10:01:36.851728 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 10:01:36.851738 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 10:01:36.851746 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 13 10:01:36.851893 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 10:01:36.852016 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 10:01:36.852136 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 10:01:36.852146 kernel: vgaarb: loaded
May 13 10:01:36.852154 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 10:01:36.852162 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 10:01:36.852174 kernel: clocksource: Switched to clocksource kvm-clock
May 13 10:01:36.852182 kernel: VFS: Disk quotas dquot_6.6.0
May 13 10:01:36.852190 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 10:01:36.852198 kernel: pnp: PnP ACPI init
May 13 10:01:36.852341 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 10:01:36.852353 kernel: pnp: PnP ACPI: found 6 devices
May 13 10:01:36.852361 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 10:01:36.852369 kernel: NET: Registered PF_INET protocol family
May 13 10:01:36.852380 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 10:01:36.852388 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 10:01:36.852396 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 10:01:36.852404 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 10:01:36.852412 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 10:01:36.852420 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 10:01:36.852428 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:01:36.852435 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:01:36.852443 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 10:01:36.852453 kernel: NET: Registered PF_XDP protocol family
May 13 10:01:36.852565 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 10:01:36.852675 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 10:01:36.852793 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 10:01:36.852937 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 10:01:36.853048 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 10:01:36.853156 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 13 10:01:36.853166 kernel: PCI: CLS 0 bytes, default 64
May 13 10:01:36.853179 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 10:01:36.853187 kernel: Initialise system trusted keyrings
May 13 10:01:36.853195 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 10:01:36.853202 kernel: Key type asymmetric registered
May 13 10:01:36.853210 kernel: Asymmetric key parser 'x509' registered
May 13 10:01:36.853218 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 10:01:36.853226 kernel: io scheduler mq-deadline registered
May 13 10:01:36.853234 kernel: io scheduler kyber registered
May 13 10:01:36.853242 kernel: io scheduler bfq registered
May 13 10:01:36.853251 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 10:01:36.853260 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 10:01:36.853268 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 10:01:36.853276 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 10:01:36.853283 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 10:01:36.853291 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 10:01:36.853299 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 10:01:36.853307 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 10:01:36.853315 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 10:01:36.853323 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 10:01:36.853484 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 10:01:36.853601 kernel: rtc_cmos 00:04: registered as rtc0
May 13 10:01:36.853724 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T10:01:36 UTC (1747130496)
May 13 10:01:36.853839 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 10:01:36.853849 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 10:01:36.853857 kernel: NET: Registered PF_INET6 protocol family
May 13 10:01:36.853865 kernel: Segment Routing with IPv6
May 13 10:01:36.853903 kernel: In-situ OAM (IOAM) with IPv6
May 13 10:01:36.853922 kernel: NET: Registered PF_PACKET protocol family
May 13 10:01:36.853930 kernel: Key type dns_resolver registered
May 13 10:01:36.853938 kernel: IPI shorthand broadcast: enabled
May 13 10:01:36.853946 kernel: sched_clock: Marking stable (2929002100, 111778559)->(3068669267, -27888608)
May 13 10:01:36.853954 kernel: registered taskstats version 1
May 13 10:01:36.853962 kernel: Loading compiled-in X.509 certificates
May 13 10:01:36.853970 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: 5c3cbe19210297b32e5cab2ad262e7b96f0f791c'
May 13 10:01:36.853977 kernel: Demotion targets for Node 0: null
May 13 10:01:36.853988 kernel: Key type .fscrypt registered
May 13 10:01:36.853996 kernel: Key type fscrypt-provisioning registered
May 13 10:01:36.854004 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 10:01:36.854012 kernel: ima: Allocated hash algorithm: sha1
May 13 10:01:36.854019 kernel: ima: No architecture policies found
May 13 10:01:36.854027 kernel: clk: Disabling unused clocks
May 13 10:01:36.854035 kernel: Warning: unable to open an initial console.
May 13 10:01:36.854043 kernel: Freeing unused kernel image (initmem) memory: 54420K
May 13 10:01:36.854051 kernel: Write protecting the kernel read-only data: 24576k
May 13 10:01:36.854060 kernel: Freeing unused kernel image (rodata/data gap) memory: 292K
May 13 10:01:36.854068 kernel: Run /init as init process
May 13 10:01:36.854076 kernel: with arguments:
May 13 10:01:36.854084 kernel: /init
May 13 10:01:36.854091 kernel: with environment:
May 13 10:01:36.854099 kernel: HOME=/
May 13 10:01:36.854106 kernel: TERM=linux
May 13 10:01:36.854114 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 10:01:36.854123 systemd[1]: Successfully made /usr/ read-only.
May 13 10:01:36.854136 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 10:01:36.854156 systemd[1]: Detected virtualization kvm.
May 13 10:01:36.854164 systemd[1]: Detected architecture x86-64.
May 13 10:01:36.854173 systemd[1]: Running in initrd.
May 13 10:01:36.854181 systemd[1]: No hostname configured, using default hostname.
May 13 10:01:36.854192 systemd[1]: Hostname set to .
May 13 10:01:36.854200 systemd[1]: Initializing machine ID from VM UUID.
May 13 10:01:36.854208 systemd[1]: Queued start job for default target initrd.target.
May 13 10:01:36.854217 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:01:36.854225 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:01:36.854235 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 10:01:36.854243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 10:01:36.854252 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 10:01:36.854264 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 10:01:36.854273 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 10:01:36.854282 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 10:01:36.854291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:01:36.854299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 10:01:36.854308 systemd[1]: Reached target paths.target - Path Units.
May 13 10:01:36.854316 systemd[1]: Reached target slices.target - Slice Units.
May 13 10:01:36.854327 systemd[1]: Reached target swap.target - Swaps.
May 13 10:01:36.854335 systemd[1]: Reached target timers.target - Timer Units.
May 13 10:01:36.854344 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 10:01:36.854352 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 10:01:36.854361 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 10:01:36.854369 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 10:01:36.854378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:01:36.854387 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 10:01:36.854395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:01:36.854405 systemd[1]: Reached target sockets.target - Socket Units.
May 13 10:01:36.854414 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 10:01:36.854422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 10:01:36.854431 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 10:01:36.854442 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 10:01:36.854453 systemd[1]: Starting systemd-fsck-usr.service...
May 13 10:01:36.854461 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 10:01:36.854470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 10:01:36.854478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:36.854487 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 10:01:36.854505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:01:36.854516 systemd[1]: Finished systemd-fsck-usr.service.
May 13 10:01:36.854545 systemd-journald[219]: Collecting audit messages is disabled.
May 13 10:01:36.854566 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 10:01:36.854576 systemd-journald[219]: Journal started
May 13 10:01:36.854594 systemd-journald[219]: Runtime Journal (/run/log/journal/d25923e9ba0d4e32aa8226e6e571d48c) is 6M, max 48.6M, 42.5M free.
May 13 10:01:36.848947 systemd-modules-load[221]: Inserted module 'overlay'
May 13 10:01:36.857060 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 10:01:36.860186 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 10:01:36.896614 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 10:01:36.896641 kernel: Bridge firewalling registered
May 13 10:01:36.877684 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 13 10:01:36.899051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 10:01:36.901833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:36.902773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:01:36.906989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 10:01:36.908778 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 10:01:36.912002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 10:01:36.919982 systemd-tmpfiles[236]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 10:01:36.925715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:01:36.928836 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:01:36.932340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 10:01:36.934997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 10:01:36.949032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 10:01:36.950546 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 10:01:36.969670 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=149a30fd2ffdbc3f620e76792215da346cc1a8b964894e8a61f45888248ff7ba
May 13 10:01:36.986662 systemd-resolved[254]: Positive Trust Anchors:
May 13 10:01:36.986677 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 10:01:36.986716 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 10:01:36.989621 systemd-resolved[254]: Defaulting to hostname 'linux'.
May 13 10:01:36.990953 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 10:01:36.995904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 10:01:37.090925 kernel: SCSI subsystem initialized
May 13 10:01:37.101903 kernel: Loading iSCSI transport class v2.0-870.
May 13 10:01:37.113921 kernel: iscsi: registered transport (tcp)
May 13 10:01:37.136958 kernel: iscsi: registered transport (qla4xxx)
May 13 10:01:37.137018 kernel: QLogic iSCSI HBA Driver
May 13 10:01:37.160802 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 10:01:37.186252 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:01:37.190447 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 10:01:37.280075 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 10:01:37.283713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 10:01:37.349896 kernel: raid6: avx2x4 gen() 30612 MB/s
May 13 10:01:37.366902 kernel: raid6: avx2x2 gen() 30950 MB/s
May 13 10:01:37.383973 kernel: raid6: avx2x1 gen() 26018 MB/s
May 13 10:01:37.383992 kernel: raid6: using algorithm avx2x2 gen() 30950 MB/s
May 13 10:01:37.401990 kernel: raid6: .... xor() 19855 MB/s, rmw enabled
May 13 10:01:37.402012 kernel: raid6: using avx2x2 recovery algorithm
May 13 10:01:37.424897 kernel: xor: automatically using best checksumming function avx
May 13 10:01:37.602942 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 10:01:37.612916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 10:01:37.616728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:01:37.650344 systemd-udevd[471]: Using default interface naming scheme 'v255'.
May 13 10:01:37.656273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:01:37.660068 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 10:01:37.691247 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
May 13 10:01:37.722489 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 10:01:37.726264 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 10:01:37.806899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:01:37.810860 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 10:01:37.854901 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 10:01:37.861590 kernel: cryptd: max_cpu_qlen set to 1000
May 13 10:01:37.861620 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 10:01:37.868203 kernel: AES CTR mode by8 optimization enabled
May 13 10:01:37.873397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 10:01:37.873451 kernel: GPT:9289727 != 19775487
May 13 10:01:37.873462 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 10:01:37.873473 kernel: GPT:9289727 != 19775487
May 13 10:01:37.873490 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 10:01:37.873500 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:37.882947 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 13 10:01:37.885899 kernel: libata version 3.00 loaded.
May 13 10:01:37.895398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 10:01:37.897097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:37.900309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:37.905174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:37.906891 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 10:01:37.931923 kernel: ahci 0000:00:1f.2: version 3.0
May 13 10:01:37.932228 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 10:01:37.933368 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 13 10:01:37.934953 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 13 10:01:37.935122 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 10:01:37.936995 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 10:01:37.941756 kernel: scsi host0: ahci
May 13 10:01:37.942036 kernel: scsi host1: ahci
May 13 10:01:37.942188 kernel: scsi host2: ahci
May 13 10:01:37.942355 kernel: scsi host3: ahci
May 13 10:01:37.942540 kernel: scsi host4: ahci
May 13 10:01:37.942697 kernel: scsi host5: ahci
May 13 10:01:37.943943 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
May 13 10:01:37.943970 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
May 13 10:01:37.943982 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
May 13 10:01:37.946735 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
May 13 10:01:37.946752 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
May 13 10:01:37.946763 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
May 13 10:01:37.948565 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 10:01:37.949518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 10:01:37.961487 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 10:01:37.976547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 10:01:38.006076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:38.009842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 10:01:38.035382 disk-uuid[635]: Primary Header is updated.
May 13 10:01:38.035382 disk-uuid[635]: Secondary Entries is updated.
May 13 10:01:38.035382 disk-uuid[635]: Secondary Header is updated.
May 13 10:01:38.039917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:38.044904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:38.259858 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 10:01:38.259934 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 10:01:38.259946 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 10:01:38.259956 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 10:01:38.260926 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 10:01:38.261901 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 10:01:38.262896 kernel: ata3.00: applying bridge limits
May 13 10:01:38.262909 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 10:01:38.263923 kernel: ata3.00: configured for UDMA/100
May 13 10:01:38.265899 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 10:01:38.325444 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 10:01:38.325705 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 10:01:38.344034 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 10:01:38.754290 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 10:01:38.755678 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 10:01:38.757632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:01:38.758833 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 10:01:38.761926 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 10:01:38.795303 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 10:01:39.045906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:39.046304 disk-uuid[636]: The operation has completed successfully.
May 13 10:01:39.077224 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 10:01:39.077366 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 10:01:39.112814 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 10:01:39.136172 sh[664]: Success
May 13 10:01:39.154694 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 10:01:39.154765 kernel: device-mapper: uevent: version 1.0.3
May 13 10:01:39.154778 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 10:01:39.163914 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 13 10:01:39.195801 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 10:01:39.200080 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 10:01:39.222374 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 10:01:39.229576 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 10:01:39.229601 kernel: BTRFS: device fsid ffca113e-5abc-43bf-8c02-7bfa2cadf852 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (676)
May 13 10:01:39.229897 kernel: BTRFS info (device dm-0): first mount of filesystem ffca113e-5abc-43bf-8c02-7bfa2cadf852
May 13 10:01:39.232489 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:39.232506 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 10:01:39.236929 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 10:01:39.238228 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 10:01:39.239749 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 10:01:39.240520 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 10:01:39.242351 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 10:01:39.268907 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (707)
May 13 10:01:39.268967 kernel: BTRFS info (device vda6): first mount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:39.271435 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:39.271468 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:01:39.277927 kernel: BTRFS info (device vda6): last unmount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:39.278857 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 10:01:39.282378 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 10:01:39.387359 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 10:01:39.389671 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 10:01:39.506544 ignition[752]: Ignition 2.21.0
May 13 10:01:39.506559 ignition[752]: Stage: fetch-offline
May 13 10:01:39.506623 ignition[752]: no configs at "/usr/lib/ignition/base.d"
May 13 10:01:39.506634 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:39.506768 ignition[752]: parsed url from cmdline: ""
May 13 10:01:39.506772 ignition[752]: no config URL provided
May 13 10:01:39.506777 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
May 13 10:01:39.506786 ignition[752]: no config at "/usr/lib/ignition/user.ign"
May 13 10:01:39.506815 ignition[752]: op(1): [started] loading QEMU firmware config module
May 13 10:01:39.506820 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 10:01:39.524702 ignition[752]: op(1): [finished] loading QEMU firmware config module
May 13 10:01:39.556409 systemd-networkd[856]: lo: Link UP
May 13 10:01:39.556421 systemd-networkd[856]: lo: Gained carrier
May 13 10:01:39.558330 systemd-networkd[856]: Enumeration completed
May 13 10:01:39.558799 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:01:39.558803 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 10:01:39.559024 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 10:01:39.559548 systemd-networkd[856]: eth0: Link UP
May 13 10:01:39.559552 systemd-networkd[856]: eth0: Gained carrier
May 13 10:01:39.559560 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:01:39.561048 systemd[1]: Reached target network.target - Network.
May 13 10:01:39.575902 ignition[752]: parsing config with SHA512: fd28c9019b10b29999345b32090ec6d63db3bf7ad3b28a968a956cd6621a83c05ac417609a36fc6e8993fa9644a5ffaefa06b309d808fb1e853679fe085277a8
May 13 10:01:39.580307 unknown[752]: fetched base config from "system"
May 13 10:01:39.580321 unknown[752]: fetched user config from "qemu"
May 13 10:01:39.580709 ignition[752]: fetch-offline: fetch-offline passed
May 13 10:01:39.580779 ignition[752]: Ignition finished successfully
May 13 10:01:39.585736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 10:01:39.587137 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 10:01:39.587929 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 10:01:39.590657 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 10:01:39.664090 ignition[869]: Ignition 2.21.0
May 13 10:01:39.664104 ignition[869]: Stage: kargs
May 13 10:01:39.664237 ignition[869]: no configs at "/usr/lib/ignition/base.d"
May 13 10:01:39.664249 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:39.665208 ignition[869]: kargs: kargs passed
May 13 10:01:39.665336 ignition[869]: Ignition finished successfully
May 13 10:01:39.673066 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 10:01:39.676112 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 10:01:39.712486 ignition[878]: Ignition 2.21.0
May 13 10:01:39.713079 ignition[878]: Stage: disks
May 13 10:01:39.713291 ignition[878]: no configs at "/usr/lib/ignition/base.d"
May 13 10:01:39.713302 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:39.714286 ignition[878]: disks: disks passed
May 13 10:01:39.714349 ignition[878]: Ignition finished successfully
May 13 10:01:39.719957 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 10:01:39.720602 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 10:01:39.722252 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 10:01:39.722573 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 10:01:39.723136 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 10:01:39.723464 systemd[1]: Reached target basic.target - Basic System.
May 13 10:01:39.724788 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 10:01:39.751492 systemd-fsck[889]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 10:01:39.759721 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 10:01:39.762417 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 10:01:39.897895 kernel: EXT4-fs (vda9): mounted filesystem b5db2f60-6937-4957-9fc1-2577b44e4198 r/w with ordered data mode. Quota mode: none.
May 13 10:01:39.898606 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 10:01:39.901070 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 10:01:39.904543 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:01:39.907160 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 10:01:39.909769 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 10:01:39.911980 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 10:01:39.912030 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 10:01:39.919720 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 10:01:39.923230 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 10:01:39.924293 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (897)
May 13 10:01:39.927908 kernel: BTRFS info (device vda6): first mount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:39.927977 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:39.929259 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:01:39.933634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:01:39.962845 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory
May 13 10:01:39.967916 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory
May 13 10:01:39.972913 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory
May 13 10:01:39.977493 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 10:01:40.071669 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 10:01:40.073829 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 10:01:40.075463 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 10:01:40.091891 kernel: BTRFS info (device vda6): last unmount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:40.103789 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 10:01:40.121894 ignition[1011]: INFO : Ignition 2.21.0
May 13 10:01:40.121894 ignition[1011]: INFO : Stage: mount
May 13 10:01:40.123771 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:01:40.123771 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:40.123771 ignition[1011]: INFO : mount: mount passed
May 13 10:01:40.123771 ignition[1011]: INFO : Ignition finished successfully
May 13 10:01:40.128201 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 10:01:40.131925 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 10:01:40.228824 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 10:01:40.230605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:01:40.256904 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1023)
May 13 10:01:40.259046 kernel: BTRFS info (device vda6): first mount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:40.259074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:40.259092 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:01:40.263357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:01:40.307681 ignition[1040]: INFO : Ignition 2.21.0 May 13 10:01:40.307681 ignition[1040]: INFO : Stage: files May 13 10:01:40.309646 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 10:01:40.309646 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 10:01:40.309646 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping May 13 10:01:40.314078 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 10:01:40.314078 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 10:01:40.317276 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 10:01:40.318858 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 10:01:40.318858 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 10:01:40.318672 unknown[1040]: wrote ssh authorized keys file for user: core May 13 10:01:40.323280 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 10:01:40.323280 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 10:01:40.491372 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 10:01:40.629709 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 10:01:40.629709 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 10:01:40.633786 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 10:01:40.633786 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 10:01:40.633786 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 10:01:40.633786 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 10:01:40.633786 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 10:01:40.633786 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 10:01:40.633786 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 10:01:40.646330 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 10:01:40.646330 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 10:01:40.646330 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 10:01:40.646330 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 10:01:40.646330 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 10:01:40.646330 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 10:01:40.812133 systemd-networkd[856]: eth0: Gained IPv6LL May 13 10:01:40.976807 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 10:01:41.344599 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 10:01:41.344599 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 10:01:41.348774 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 10:01:41.379788 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 10:01:41.379788 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 10:01:41.379788 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 13 10:01:41.385168 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 10:01:41.385168 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 10:01:41.385168 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 13 10:01:41.385168 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 13 10:01:41.401128 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 10:01:41.405247 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 10:01:41.407011 ignition[1040]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 13 10:01:41.407011 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 13 10:01:41.407011 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 13 10:01:41.407011 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 10:01:41.407011 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 10:01:41.407011 ignition[1040]: INFO : files: files passed May 13 10:01:41.407011 ignition[1040]: INFO : Ignition finished successfully May 13 10:01:41.415571 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 10:01:41.419608 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 10:01:41.422316 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
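The files stage that just finished is driven by a user config the journal never prints; only its effects (ops 3 through 12) are visible. Below is a hypothetical reconstruction, in Python, of an Ignition spec-3.x JSON config that would yield ops like these. Field names follow the published Ignition spec, the paths and URLs are copied from the log, and all inline contents and unit bodies are placeholders of my own:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {   # op(3): fetched over HTTPS into /opt
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                },
                {   # op(8): small inline file; mode is decimal (420 == 0o644)
                    "path": "/etc/flatcar/update.conf",
                    "mode": 420,
                    "contents": {"source": "data:,placeholder%0A"},
                },
            ],
            "links": [
                {   # op(9): the symlink that activates the kubernetes sysext
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                },
            ],
        },
        "systemd": {
            "units": [
                # op(11): written and preset to enabled (unit body elided)
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
                # op(f): preset to disabled, removing enablement symlinks
                {"name": "coreos-metadata.service", "enabled": False},
            ],
        },
    }
    print(json.dumps(config, indent=2))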
May 13 10:01:41.443369 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 10:01:41.443541 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 10:01:41.447894 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory May 13 10:01:41.451901 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 10:01:41.451901 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 10:01:41.455452 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 10:01:41.458469 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 10:01:41.459147 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 10:01:41.463189 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 10:01:41.536068 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 10:01:41.537190 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 10:01:41.540323 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 10:01:41.542726 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 10:01:41.545094 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 10:01:41.547664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 10:01:41.587714 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 10:01:41.590490 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 10:01:41.617680 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 10:01:41.618979 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 10:01:41.621184 systemd[1]: Stopped target timers.target - Timer Units. May 13 10:01:41.623177 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 10:01:41.623294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 10:01:41.625421 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 10:01:41.627177 systemd[1]: Stopped target basic.target - Basic System. May 13 10:01:41.629211 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 10:01:41.631259 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 10:01:41.633275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 10:01:41.635442 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 13 10:01:41.637645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 10:01:41.639724 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 10:01:41.642001 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 10:01:41.644024 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 10:01:41.646236 systemd[1]: Stopped target swap.target - Swaps. May 13 10:01:41.648000 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 10:01:41.648112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
May 13 10:01:41.650251 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 10:01:41.651862 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 10:01:41.653937 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 10:01:41.654079 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 10:01:41.656161 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 10:01:41.656276 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 10:01:41.658466 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 10:01:41.658589 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 10:01:41.660598 systemd[1]: Stopped target paths.target - Path Units. May 13 10:01:41.662324 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 10:01:41.666692 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 10:01:41.668010 systemd[1]: Stopped target slices.target - Slice Units. May 13 10:01:41.669974 systemd[1]: Stopped target sockets.target - Socket Units. May 13 10:01:41.671707 systemd[1]: iscsid.socket: Deactivated successfully. May 13 10:01:41.671799 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 10:01:41.673716 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 10:01:41.673805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 10:01:41.676141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 10:01:41.676257 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 10:01:41.678171 systemd[1]: ignition-files.service: Deactivated successfully. May 13 10:01:41.678276 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 10:01:41.680859 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 10:01:41.682565 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 10:01:41.682690 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 10:01:41.685401 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 10:01:41.686368 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 10:01:41.686488 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 10:01:41.688691 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 10:01:41.688800 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 10:01:41.694939 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 10:01:41.695046 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 10:01:41.708352 ignition[1095]: INFO : Ignition 2.21.0 May 13 10:01:41.708352 ignition[1095]: INFO : Stage: umount May 13 10:01:41.710187 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 10:01:41.710187 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 10:01:41.712666 ignition[1095]: INFO : umount: umount passed May 13 10:01:41.712666 ignition[1095]: INFO : Ignition finished successfully May 13 10:01:41.713530 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 10:01:41.714195 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 13 10:01:41.714320 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 10:01:41.715711 systemd[1]: Stopped target network.target - Network. May 13 10:01:41.716889 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 10:01:41.716970 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 10:01:41.717388 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 10:01:41.717439 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 10:01:41.717713 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 10:01:41.717771 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 10:01:41.718206 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 10:01:41.718260 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 10:01:41.718714 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 10:01:41.726280 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 10:01:41.735814 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 10:01:41.736010 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 10:01:41.740586 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 10:01:41.741282 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 10:01:41.741391 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 10:01:41.745150 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 10:01:41.747700 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 10:01:41.747850 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 10:01:41.751796 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 10:01:41.752017 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 13 10:01:41.755409 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 10:01:41.755452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 10:01:41.760036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 10:01:41.760507 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 10:01:41.760558 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 10:01:41.760904 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 10:01:41.760954 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 10:01:41.766885 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 10:01:41.766936 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 10:01:41.767443 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 10:01:41.768597 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 10:01:41.793002 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 10:01:41.793247 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 10:01:41.794046 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 10:01:41.794104 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 13 10:01:41.798417 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 10:01:41.798469 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 10:01:41.799115 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 10:01:41.799183 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 10:01:41.799825 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 10:01:41.799907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 10:01:41.800625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 10:01:41.800688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 10:01:41.802450 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 10:01:41.823452 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 13 10:01:41.823536 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 13 10:01:41.828307 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 10:01:41.828358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 10:01:41.831908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 10:01:41.831960 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 10:01:41.835782 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 10:01:41.837158 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 10:01:41.844972 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 10:01:41.845100 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 10:01:41.868935 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 10:01:41.869071 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 10:01:41.869727 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 10:01:41.870352 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 10:01:41.870409 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 10:01:41.871545 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 10:01:41.896547 systemd[1]: Switching root. May 13 10:01:41.941128 systemd-journald[219]: Journal stopped May 13 10:01:43.131450 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). May 13 10:01:43.131512 kernel: SELinux: policy capability network_peer_controls=1 May 13 10:01:43.131526 kernel: SELinux: policy capability open_perms=1 May 13 10:01:43.131537 kernel: SELinux: policy capability extended_socket_class=1 May 13 10:01:43.131556 kernel: SELinux: policy capability always_check_network=0 May 13 10:01:43.131575 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 10:01:43.131589 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 10:01:43.131605 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 10:01:43.131619 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 10:01:43.131630 kernel: SELinux: policy capability userspace_initial_context=0 May 13 10:01:43.131642 kernel: audit: type=1403 audit(1747130502.345:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 10:01:43.131655 systemd[1]: Successfully loaded SELinux policy in 47.270ms. 
May 13 10:01:43.131688 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.372ms. May 13 10:01:43.131704 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 10:01:43.131722 systemd[1]: Detected virtualization kvm. May 13 10:01:43.131734 systemd[1]: Detected architecture x86-64. May 13 10:01:43.131746 systemd[1]: Detected first boot. May 13 10:01:43.131758 systemd[1]: Initializing machine ID from VM UUID. May 13 10:01:43.131770 zram_generator::config[1140]: No configuration found. May 13 10:01:43.131783 kernel: Guest personality initialized and is inactive May 13 10:01:43.131795 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 10:01:43.131806 kernel: Initialized host personality May 13 10:01:43.131822 kernel: NET: Registered PF_VSOCK protocol family May 13 10:01:43.131834 systemd[1]: Populated /etc with preset unit settings. May 13 10:01:43.131848 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 10:01:43.131860 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 10:01:43.131886 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 10:01:43.131910 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 10:01:43.131925 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 10:01:43.131942 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 10:01:43.131955 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 10:01:43.131975 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 10:01:43.131987 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 10:01:43.132000 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 10:01:43.132012 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 10:01:43.132024 systemd[1]: Created slice user.slice - User and Session Slice. May 13 10:01:43.132037 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 10:01:43.132050 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 10:01:43.132062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 10:01:43.132075 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 10:01:43.132092 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 10:01:43.132105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 10:01:43.132121 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 10:01:43.132140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 10:01:43.132152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 10:01:43.132164 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
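"Initializing machine ID from VM UUID" above is systemd's first-boot path on KVM: /etc/machine-id is derived from the hypervisor-provided DMI product UUID rather than generated at random. A rough, hypothetical Python rendering of that rule (the real logic is C inside systemd's machine-id handling and may differ in detail):

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        # Read the UUID the hypervisor exposes via DMI/SMBIOS.
        with open(path) as f:
            uuid = f.read().strip()
        # A machine ID is the UUID lowercased with dashes removed: 32 hex chars.
        mid = uuid.replace("-", "").lower()
        if len(mid) != 32 or any(c not in "0123456789abcdef" for c in mid):
            raise ValueError("DMI product UUID not usable as a machine ID")
        return mid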
May 13 10:01:43.132178 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 10:01:43.132195 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 10:01:43.132208 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 10:01:43.132220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 10:01:43.132232 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 10:01:43.132244 systemd[1]: Reached target slices.target - Slice Units. May 13 10:01:43.132256 systemd[1]: Reached target swap.target - Swaps. May 13 10:01:43.132268 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 10:01:43.132280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 10:01:43.132292 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 10:01:43.132310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 10:01:43.132322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 10:01:43.132334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 10:01:43.132346 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 10:01:43.132358 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 10:01:43.132371 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 10:01:43.132384 systemd[1]: Mounting media.mount - External Media Directory... May 13 10:01:43.132396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:43.132408 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 10:01:43.132425 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 10:01:43.132437 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 10:01:43.132450 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 10:01:43.132462 systemd[1]: Reached target machines.target - Containers. May 13 10:01:43.132474 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 10:01:43.132486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:01:43.132498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 10:01:43.132511 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 10:01:43.132523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 10:01:43.132539 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 10:01:43.132560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 10:01:43.132572 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 10:01:43.132591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 10:01:43.132604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 13 10:01:43.132617 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 10:01:43.132629 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 10:01:43.132641 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 10:01:43.132658 systemd[1]: Stopped systemd-fsck-usr.service. May 13 10:01:43.132672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 10:01:43.132684 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 10:01:43.132696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 10:01:43.132708 kernel: fuse: init (API version 7.41) May 13 10:01:43.132720 kernel: loop: module loaded May 13 10:01:43.132732 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 10:01:43.132744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 10:01:43.132757 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 10:01:43.132774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 10:01:43.132787 systemd[1]: verity-setup.service: Deactivated successfully. May 13 10:01:43.132799 systemd[1]: Stopped verity-setup.service. May 13 10:01:43.132812 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:43.132824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 10:01:43.132840 kernel: ACPI: bus type drm_connector registered May 13 10:01:43.132852 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 10:01:43.132864 systemd[1]: Mounted media.mount - External Media Directory. May 13 10:01:43.132898 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 10:01:43.132910 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 10:01:43.132951 systemd-journald[1217]: Collecting audit messages is disabled. May 13 10:01:43.132983 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 10:01:43.132996 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 10:01:43.133008 systemd-journald[1217]: Journal started May 13 10:01:43.133030 systemd-journald[1217]: Runtime Journal (/run/log/journal/d25923e9ba0d4e32aa8226e6e571d48c) is 6M, max 48.6M, 42.5M free. May 13 10:01:42.881773 systemd[1]: Queued start job for default target multi-user.target. May 13 10:01:42.897056 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 10:01:42.897539 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 10:01:43.135925 systemd[1]: Started systemd-journald.service - Journal Service. May 13 10:01:43.137200 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 10:01:43.138823 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 10:01:43.139129 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 10:01:43.140643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 10:01:43.140892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 13 10:01:43.142352 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 10:01:43.142589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 10:01:43.143972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 10:01:43.144189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 10:01:43.145722 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 10:01:43.145968 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 10:01:43.147374 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 10:01:43.147599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 10:01:43.149149 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 10:01:43.150607 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 10:01:43.152210 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 10:01:43.153831 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 10:01:43.170618 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 10:01:43.174014 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 10:01:43.176957 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 10:01:43.178163 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 10:01:43.178198 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 10:01:43.180233 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 10:01:43.184821 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 10:01:43.186045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 10:01:43.188021 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 10:01:43.191059 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 10:01:43.193317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 10:01:43.195682 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 10:01:43.196861 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 10:01:43.202070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 10:01:43.206015 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 10:01:43.210054 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 10:01:43.216099 systemd-journald[1217]: Time spent on flushing to /var/log/journal/d25923e9ba0d4e32aa8226e6e571d48c is 21.567ms for 974 entries. May 13 10:01:43.216099 systemd-journald[1217]: System Journal (/var/log/journal/d25923e9ba0d4e32aa8226e6e571d48c) is 8M, max 195.6M, 187.6M free. May 13 10:01:43.270676 systemd-journald[1217]: Received client request to flush runtime journal. 
May 13 10:01:43.270744 kernel: loop0: detected capacity change from 0 to 210664 May 13 10:01:43.215558 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 10:01:43.273306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 10:01:43.218337 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 10:01:43.220269 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 10:01:43.228886 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 10:01:43.234042 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 10:01:43.246327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 10:01:43.248387 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 10:01:43.272374 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 10:01:43.278896 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 10:01:43.282467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 10:01:43.284734 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 10:01:43.294953 kernel: loop1: detected capacity change from 0 to 146240 May 13 10:01:43.314168 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 13 10:01:43.314190 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 13 10:01:43.320651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 10:01:43.325036 kernel: loop2: detected capacity change from 0 to 113872 May 13 10:01:43.362914 kernel: loop3: detected capacity change from 0 to 210664 May 13 10:01:43.373905 kernel: loop4: detected capacity change from 0 to 146240 May 13 10:01:43.388904 kernel: loop5: detected capacity change from 0 to 113872 May 13 10:01:43.400051 (sd-merge)[1281]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 10:01:43.400800 (sd-merge)[1281]: Merged extensions into '/usr'. May 13 10:01:43.407404 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... May 13 10:01:43.407586 systemd[1]: Reloading... May 13 10:01:43.483919 zram_generator::config[1307]: No configuration found. May 13 10:01:43.552089 ldconfig[1254]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 10:01:43.596444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:01:43.688193 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 10:01:43.688828 systemd[1]: Reloading finished in 280 ms. May 13 10:01:43.712364 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 10:01:43.714047 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 10:01:43.735593 systemd[1]: Starting ensure-sysext.service... May 13 10:01:43.737584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 10:01:43.758167 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
May 13 10:01:43.758212 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 13 10:01:43.758543 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 10:01:43.758833 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 10:01:43.759856 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 10:01:43.760219 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. May 13 10:01:43.760348 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. May 13 10:01:43.761758 systemd[1]: Reload requested from client PID 1344 ('systemctl') (unit ensure-sysext.service)... May 13 10:01:43.761773 systemd[1]: Reloading... May 13 10:01:43.765438 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot. May 13 10:01:43.765519 systemd-tmpfiles[1345]: Skipping /boot May 13 10:01:43.778404 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot. May 13 10:01:43.778421 systemd-tmpfiles[1345]: Skipping /boot May 13 10:01:43.823902 zram_generator::config[1378]: No configuration found. May 13 10:01:43.911060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:01:43.994398 systemd[1]: Reloading finished in 232 ms. May 13 10:01:44.021444 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 10:01:44.041555 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 10:01:44.050849 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 10:01:44.053277 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 10:01:44.055647 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 10:01:44.066926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 10:01:44.070355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 10:01:44.073114 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 10:01:44.076216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:44.076398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:01:44.078552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 10:01:44.087899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 10:01:44.092078 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 10:01:44.093393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 10:01:44.093494 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
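Referring back to the sd-merge lines a few entries up (extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes' merged into /usr): systemd-sysext only merges images whose extension-release metadata is compatible with the host. A hedged Python sketch of that check, simplified from the documented ID / SYSEXT_LEVEL / VERSION_ID matching rule, with paths as they appear once the image content is visible under /usr:

    def parse_release(path):
        # os-release / extension-release files are KEY=VALUE lines.
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    fields[key] = value.strip('"')
        return fields

    def sysext_compatible(ext_name, host="/etc/os-release"):
        ext = parse_release(
            f"/usr/lib/extension-release.d/extension-release.{ext_name}")
        osr = parse_release(host)
        if ext.get("ID") == "_any":        # wildcard: matches any host
            return True
        if ext.get("ID") != osr.get("ID"):
            return False
        wanted = ext.get("SYSEXT_LEVEL") or ext.get("VERSION_ID")
        return wanted in (osr.get("SYSEXT_LEVEL"), osr.get("VERSION_ID"))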
May 13 10:01:44.093611 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:44.094752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 10:01:44.095036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 10:01:44.100491 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 10:01:44.105442 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 10:01:44.105795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 10:01:44.107476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 10:01:44.107724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 10:01:44.117157 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 10:01:44.120731 systemd-udevd[1415]: Using default interface naming scheme 'v255'. May 13 10:01:44.124601 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:44.124796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:01:44.126264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 10:01:44.128587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 10:01:44.132266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 10:01:44.133414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 10:01:44.133821 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 10:01:44.138948 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 10:01:44.142364 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 10:01:44.143758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:44.146221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 10:01:44.146464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 10:01:44.148317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 10:01:44.148550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 10:01:44.150662 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 10:01:44.151136 augenrules[1449]: No rules May 13 10:01:44.156177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 10:01:44.158118 systemd[1]: audit-rules.service: Deactivated successfully. May 13 10:01:44.158391 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 10:01:44.159965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 10:01:44.161827 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 10:01:44.171402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 13 10:01:44.175235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:44.180074 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 10:01:44.181216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:01:44.185092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 10:01:44.196076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 10:01:44.199059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 10:01:44.201952 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 10:01:44.203133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 10:01:44.203173 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 10:01:44.207088 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 10:01:44.208970 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 10:01:44.209008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 10:01:44.209744 systemd[1]: Finished ensure-sysext.service. May 13 10:01:44.211281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 10:01:44.211550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 10:01:44.221176 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 10:01:44.222999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 10:01:44.227591 augenrules[1479]: /sbin/augenrules: No change May 13 10:01:44.230469 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 10:01:44.234682 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 10:01:44.240701 augenrules[1511]: No rules May 13 10:01:44.242311 systemd[1]: audit-rules.service: Deactivated successfully. May 13 10:01:44.243383 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 10:01:44.255059 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 10:01:44.256049 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 10:01:44.257658 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 10:01:44.259271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 10:01:44.259499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 10:01:44.262769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 10:01:44.286788 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
May 13 10:01:44.379905 kernel: mousedev: PS/2 mouse device common for all mice May 13 10:01:44.379966 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 13 10:01:44.388638 systemd-resolved[1414]: Positive Trust Anchors: May 13 10:01:44.388654 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 10:01:44.388686 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 10:01:44.389454 systemd-networkd[1496]: lo: Link UP May 13 10:01:44.389458 systemd-networkd[1496]: lo: Gained carrier May 13 10:01:44.392752 systemd-networkd[1496]: Enumeration completed May 13 10:01:44.393017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 10:01:44.393575 systemd-resolved[1414]: Defaulting to hostname 'linux'. May 13 10:01:44.394464 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 10:01:44.395208 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 10:01:44.395222 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 10:01:44.395941 systemd-networkd[1496]: eth0: Link UP May 13 10:01:44.396116 systemd-networkd[1496]: eth0: Gained carrier May 13 10:01:44.396136 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 10:01:44.398427 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 10:01:44.402127 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 10:01:44.404625 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 10:01:44.405909 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 10:01:44.406395 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 10:01:44.409965 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 10:01:44.410741 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. May 13 10:01:44.411808 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 10:01:44.411860 systemd-timesyncd[1510]: Initial clock synchronization to Tue 2025-05-13 10:01:44.358427 UTC. May 13 10:01:44.413916 kernel: ACPI: button: Power Button [PWRF] May 13 10:01:44.421926 systemd[1]: Reached target network.target - Network. May 13 10:01:44.422197 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 10:01:44.422570 systemd[1]: Reached target sysinit.target - System Initialization. May 13 10:01:44.423307 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
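The timesyncd lines above ("Contacted time server 10.0.0.1:123") are plain SNTP over UDP. For illustration, a minimal mode-3 client request and decoding of the server's transmit timestamp in Python; a protocol sketch following the RFC 4330 packet layout, not timesyncd's actual implementation:

    import socket
    import struct

    NTP_UNIX_OFFSET = 2208988800  # seconds from 1900-01-01 to 1970-01-01

    def sntp_query(server="10.0.0.1", port=123, timeout=2.0):
        # 48-byte request: LI=0, VN=4, Mode=3 (client) -> first byte 0x23.
        request = b"\x23" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(request, (server, port))
            reply, _ = sock.recvfrom(48)
        # Transmit-timestamp seconds live at byte offset 40, big-endian.
        ntp_seconds = struct.unpack("!I", reply[40:44])[0]
        return ntp_seconds - NTP_UNIX_OFFSET  # Unix time, whole seconds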
May 13 10:01:44.423595 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 10:01:44.425276 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 13 10:01:44.425632 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 10:01:44.426132 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 10:01:44.426153 systemd[1]: Reached target paths.target - Path Units. May 13 10:01:44.426459 systemd[1]: Reached target time-set.target - System Time Set. May 13 10:01:44.426979 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 10:01:44.427410 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 10:01:44.427630 systemd[1]: Reached target timers.target - Timer Units. May 13 10:01:44.429777 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 10:01:44.431818 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 10:01:44.436051 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 10:01:44.436607 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 10:01:44.436811 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 10:01:44.452318 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 10:01:44.454292 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 10:01:44.457968 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 10:01:44.460292 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 10:01:44.465030 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 10:01:44.465290 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 10:01:44.461974 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 10:01:44.467449 systemd[1]: Reached target sockets.target - Socket Units. May 13 10:01:44.468764 systemd[1]: Reached target basic.target - Basic System. May 13 10:01:44.471081 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 10:01:44.471121 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 10:01:44.472950 systemd[1]: Starting containerd.service - containerd container runtime... May 13 10:01:44.476056 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 10:01:44.479104 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 10:01:44.482060 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 10:01:44.489209 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 10:01:44.491410 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 10:01:44.493988 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
May 13 10:01:44.496855 jq[1565]: false
May 13 10:01:44.499324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 10:01:44.503053 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 10:01:44.505490 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache
May 13 10:01:44.505751 oslogin_cache_refresh[1567]: Refreshing passwd entry cache
May 13 10:01:44.506069 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 10:01:44.510994 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 10:01:44.519126 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting
May 13 10:01:44.519126 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 10:01:44.519126 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache
May 13 10:01:44.518685 oslogin_cache_refresh[1567]: Failure getting users, quitting
May 13 10:01:44.518706 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 10:01:44.518764 oslogin_cache_refresh[1567]: Refreshing group entry cache
May 13 10:01:44.523050 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 10:01:44.525024 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 10:01:44.525555 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 10:01:44.528119 systemd[1]: Starting update-engine.service - Update Engine...
May 13 10:01:44.530194 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 10:01:44.535940 extend-filesystems[1566]: Found loop3
May 13 10:01:44.537394 extend-filesystems[1566]: Found loop4
May 13 10:01:44.538230 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 10:01:44.540315 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 10:01:44.541059 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 10:01:44.541382 systemd[1]: motdgen.service: Deactivated successfully.
May 13 10:01:44.542685 oslogin_cache_refresh[1567]: Failure getting groups, quitting
May 13 10:01:44.543152 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting
May 13 10:01:44.543152 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 10:01:44.541642 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 10:01:44.543192 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 10:01:44.543925 extend-filesystems[1566]: Found loop5
May 13 10:01:44.543925 extend-filesystems[1566]: Found sr0
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda1
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda2
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda3
May 13 10:01:44.543925 extend-filesystems[1566]: Found usr
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda4
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda6
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda7
May 13 10:01:44.543925 extend-filesystems[1566]: Found vda9
May 13 10:01:44.543925 extend-filesystems[1566]: Checking size of /dev/vda9
May 13 10:01:44.544390 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 10:01:44.544645 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 10:01:44.564585 jq[1580]: true
May 13 10:01:44.551677 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 13 10:01:44.557943 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 13 10:01:44.568531 update_engine[1579]: I20250513 10:01:44.567675 1579 main.cc:92] Flatcar Update Engine starting
May 13 10:01:44.574236 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 10:01:44.580368 extend-filesystems[1566]: Resized partition /dev/vda9
May 13 10:01:44.587051 jq[1590]: true
May 13 10:01:44.591820 extend-filesystems[1604]: resize2fs 1.47.2 (1-Jan-2025)
May 13 10:01:44.611484 tar[1584]: linux-amd64/helm
May 13 10:01:44.613901 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 10:01:44.627576 dbus-daemon[1562]: [system] SELinux support is enabled
May 13 10:01:44.655140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:44.656569 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 10:01:44.680894 update_engine[1579]: I20250513 10:01:44.666450 1579 update_check_scheduler.cc:74] Next update check in 8m46s
May 13 10:01:44.674615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 10:01:44.674744 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 10:01:44.676350 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 10:01:44.676454 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 10:01:44.683686 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 10:01:44.685273 systemd[1]: Started update-engine.service - Update Engine.
May 13 10:01:44.692173 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 10:01:44.713940 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 10:01:44.713940 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 10:01:44.713940 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 10:01:44.711347 systemd[1]: extend-filesystems.service: Deactivated successfully.
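The interleaved extend-filesystems, resize2fs, and EXT4-fs records above are Flatcar's first-boot root growth: the ROOT partition /dev/vda9 is enlarged, then the mounted ext4 filesystem is resized online from 553472 to 1864699 blocks. Done by hand, the sequence would be roughly the following (a sketch only; growpart from cloud-utils stands in here for whatever the service actually invokes):

    growpart /dev/vda 9    # hypothetical equivalent: grow partition 9 to fill the disk
    resize2fs /dev/vda9    # online ext4 resize; safe while mounted on /, as the log shows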
May 13 10:01:44.717978 extend-filesystems[1566]: Resized filesystem in /dev/vda9
May 13 10:01:44.711632 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 10:01:44.726676 bash[1624]: Updated "/home/core/.ssh/authorized_keys"
May 13 10:01:44.728325 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 10:01:44.732404 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 10:01:44.754259 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (Power Button)
May 13 10:01:44.754292 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 10:01:44.756354 systemd-logind[1575]: New seat seat0.
May 13 10:01:44.758573 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 10:01:44.771521 kernel: kvm_amd: TSC scaling supported
May 13 10:01:44.771579 kernel: kvm_amd: Nested Virtualization enabled
May 13 10:01:44.771593 kernel: kvm_amd: Nested Paging enabled
May 13 10:01:44.771605 kernel: kvm_amd: LBR virtualization supported
May 13 10:01:44.773387 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 10:01:44.773491 kernel: kvm_amd: Virtual GIF supported
May 13 10:01:44.847102 kernel: EDAC MC: Ver: 3.0.0
May 13 10:01:44.875905 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 10:01:44.882371 containerd[1591]: time="2025-05-13T10:01:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 10:01:44.885673 containerd[1591]: time="2025-05-13T10:01:44.885644943Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 13 10:01:44.894600 containerd[1591]: time="2025-05-13T10:01:44.894568691Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.256µs"
May 13 10:01:44.894600 containerd[1591]: time="2025-05-13T10:01:44.894596744Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 10:01:44.894656 containerd[1591]: time="2025-05-13T10:01:44.894612333Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 10:01:44.894811 containerd[1591]: time="2025-05-13T10:01:44.894791739Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 10:01:44.894835 containerd[1591]: time="2025-05-13T10:01:44.894811416Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 10:01:44.894854 containerd[1591]: time="2025-05-13T10:01:44.894832285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 10:01:44.894927 containerd[1591]: time="2025-05-13T10:01:44.894909330Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 10:01:44.894961 containerd[1591]: time="2025-05-13T10:01:44.894925610Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 10:01:44.895228 containerd[1591]: time="2025-05-13T10:01:44.895207038Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 10:01:44.895253 containerd[1591]: time="2025-05-13T10:01:44.895227948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 10:01:44.895253 containerd[1591]: time="2025-05-13T10:01:44.895238046Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 10:01:44.895253 containerd[1591]: time="2025-05-13T10:01:44.895245470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 10:01:44.895360 containerd[1591]: time="2025-05-13T10:01:44.895341981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 10:01:44.895612 containerd[1591]: time="2025-05-13T10:01:44.895592091Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 10:01:44.895646 containerd[1591]: time="2025-05-13T10:01:44.895629451Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 10:01:44.895669 containerd[1591]: time="2025-05-13T10:01:44.895645381Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 10:01:44.895695 containerd[1591]: time="2025-05-13T10:01:44.895667432Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 10:01:44.895857 containerd[1591]: time="2025-05-13T10:01:44.895839905Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 10:01:44.896231 containerd[1591]: time="2025-05-13T10:01:44.896213025Z" level=info msg="metadata content store policy set" policy=shared
May 13 10:01:44.902104 containerd[1591]: time="2025-05-13T10:01:44.902050745Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 10:01:44.902104 containerd[1591]: time="2025-05-13T10:01:44.902096050Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 10:01:44.902155 containerd[1591]: time="2025-05-13T10:01:44.902109435Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 10:01:44.902203 containerd[1591]: time="2025-05-13T10:01:44.902187331Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 10:01:44.902235 containerd[1591]: time="2025-05-13T10:01:44.902206177Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 10:01:44.902235 containerd[1591]: time="2025-05-13T10:01:44.902216356Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 10:01:44.902235 containerd[1591]: time="2025-05-13T10:01:44.902227517Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 10:01:44.902291 containerd[1591]: time="2025-05-13T10:01:44.902237856Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 10:01:44.902291 containerd[1591]: time="2025-05-13T10:01:44.902247083Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 10:01:44.902291 containerd[1591]: time="2025-05-13T10:01:44.902255850Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 10:01:44.902291 containerd[1591]: time="2025-05-13T10:01:44.902264316Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 10:01:44.902291 containerd[1591]: time="2025-05-13T10:01:44.902274815Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902384521Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902404348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902416962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902426861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902436639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902446878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902456566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902465874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902476524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902486032Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902513062Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902578655Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902591399Z" level=info msg="Start snapshots syncer"
May 13 10:01:44.903882 containerd[1591]: time="2025-05-13T10:01:44.902630312Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 10:01:44.904137 containerd[1591]: time="2025-05-13T10:01:44.902898565Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 10:01:44.904137 containerd[1591]: time="2025-05-13T10:01:44.902946816Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903743110Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903891147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903909993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903920041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903931463Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903942393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903951921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903961670Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.903989552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.904000813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 10:01:44.904261 containerd[1591]: time="2025-05-13T10:01:44.904013346Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 10:01:44.904772 containerd[1591]: time="2025-05-13T10:01:44.904754056Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 10:01:44.904799 containerd[1591]: time="2025-05-13T10:01:44.904778532Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 10:01:44.904799 containerd[1591]: time="2025-05-13T10:01:44.904787138Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 10:01:44.904887 containerd[1591]: time="2025-05-13T10:01:44.904857560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 10:01:44.904921 containerd[1591]: time="2025-05-13T10:01:44.904886715Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 10:01:44.904921 containerd[1591]: time="2025-05-13T10:01:44.904896834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 10:01:44.904921 containerd[1591]: time="2025-05-13T10:01:44.904905991Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 10:01:44.904973 containerd[1591]: time="2025-05-13T10:01:44.904922772Z" level=info msg="runtime interface created"
May 13 10:01:44.904973 containerd[1591]: time="2025-05-13T10:01:44.904927892Z" level=info msg="created NRI interface"
May 13 10:01:44.904973 containerd[1591]: time="2025-05-13T10:01:44.904935276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 10:01:44.904973 containerd[1591]: time="2025-05-13T10:01:44.904944974Z" level=info msg="Connect containerd service"
May 13 10:01:44.904973 containerd[1591]: time="2025-05-13T10:01:44.904967085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 10:01:44.905996 containerd[1591]: time="2025-05-13T10:01:44.905973383Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 10:01:44.907686 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 10:01:44.934364 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 10:01:44.951096 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 10:01:44.962153 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:44.973121 systemd[1]: issuegen.service: Deactivated successfully.
May 13 10:01:44.973415 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 10:01:44.977037 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 10:01:44.997250 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 10:01:44.999352 containerd[1591]: time="2025-05-13T10:01:44.999274549Z" level=info msg="Start subscribing containerd event"
May 13 10:01:44.999410 containerd[1591]: time="2025-05-13T10:01:44.999352796Z" level=info msg="Start recovering state"
May 13 10:01:44.999519 containerd[1591]: time="2025-05-13T10:01:44.999473072Z" level=info msg="Start event monitor"
May 13 10:01:44.999519 containerd[1591]: time="2025-05-13T10:01:44.999502006Z" level=info msg="Start cni network conf syncer for default"
May 13 10:01:44.999519 containerd[1591]: time="2025-05-13T10:01:44.999510492Z" level=info msg="Start streaming server"
May 13 10:01:44.999519 containerd[1591]: time="2025-05-13T10:01:44.999521623Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 10:01:44.999615 containerd[1591]: time="2025-05-13T10:01:44.999530329Z" level=info msg="runtime interface starting up..."
May 13 10:01:44.999615 containerd[1591]: time="2025-05-13T10:01:44.999537272Z" level=info msg="starting plugins..."
May 13 10:01:44.999615 containerd[1591]: time="2025-05-13T10:01:44.999551899Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 10:01:45.000039 containerd[1591]: time="2025-05-13T10:01:44.999990302Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 10:01:45.000190 containerd[1591]: time="2025-05-13T10:01:45.000138560Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 10:01:45.001025 containerd[1591]: time="2025-05-13T10:01:45.000976744Z" level=info msg="containerd successfully booted in 0.119137s"
May 13 10:01:45.001255 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 10:01:45.005108 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 13 10:01:45.006423 systemd[1]: Reached target getty.target - Login Prompts.
May 13 10:01:45.007811 systemd[1]: Started containerd.service - containerd container runtime.
May 13 10:01:45.161817 tar[1584]: linux-amd64/LICENSE
May 13 10:01:45.161925 tar[1584]: linux-amd64/README.md
May 13 10:01:45.192143 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 10:01:46.252040 systemd-networkd[1496]: eth0: Gained IPv6LL
May 13 10:01:46.255267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 10:01:46.257238 systemd[1]: Reached target network-online.target - Network is Online.
May 13 10:01:46.260076 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 10:01:46.262646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 10:01:46.265023 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 10:01:46.300132 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 10:01:46.303373 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 10:01:46.303680 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 10:01:46.305356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 10:01:46.902665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 10:01:46.904324 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 10:01:46.905699 systemd[1]: Startup finished in 3.009s (kernel) + 5.708s (initrd) + 4.605s (userspace) = 13.323s.
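The "Startup finished" line above is the same breakdown systemd-analyze prints after boot, and the units behind the 4.605s of userspace time can be ranked as well (standard systemd tooling, nothing specific to this machine):

    systemd-analyze          # "Startup finished in ... (kernel) + ... (initrd) + ... (userspace)"
    systemd-analyze blame    # per-unit initialization times, longest first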
May 13 10:01:46.951289 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 10:01:47.387820 kubelet[1703]: E0513 10:01:47.387684 1703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 10:01:47.391796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 10:01:47.392068 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 10:01:47.392540 systemd[1]: kubelet.service: Consumed 931ms CPU time, 242.9M memory peak.
May 13 10:01:50.384542 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 10:01:50.386008 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:57780.service - OpenSSH per-connection server daemon (10.0.0.1:57780).
May 13 10:01:50.461640 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 57780 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:01:50.463997 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:01:50.471558 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 10:01:50.472839 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 10:01:50.479725 systemd-logind[1575]: New session 1 of user core.
May 13 10:01:50.497571 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 10:01:50.501017 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 10:01:50.519497 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 10:01:50.522464 systemd-logind[1575]: New session c1 of user core.
May 13 10:01:50.683021 systemd[1722]: Queued start job for default target default.target.
May 13 10:01:50.704862 systemd[1722]: Created slice app.slice - User Application Slice.
May 13 10:01:50.704922 systemd[1722]: Reached target paths.target - Paths.
May 13 10:01:50.704982 systemd[1722]: Reached target timers.target - Timers.
May 13 10:01:50.707175 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 10:01:50.722259 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 10:01:50.722443 systemd[1722]: Reached target sockets.target - Sockets.
May 13 10:01:50.722502 systemd[1722]: Reached target basic.target - Basic System.
May 13 10:01:50.722546 systemd[1722]: Reached target default.target - Main User Target.
May 13 10:01:50.722588 systemd[1722]: Startup finished in 193ms.
May 13 10:01:50.722733 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 10:01:50.724528 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 10:01:50.796569 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786).
May 13 10:01:50.859999 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:01:50.861398 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:01:50.866318 systemd-logind[1575]: New session 2 of user core.
May 13 10:01:50.884022 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 10:01:50.938046 sshd[1735]: Connection closed by 10.0.0.1 port 57786
May 13 10:01:50.938514 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
May 13 10:01:50.950508 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:57786.service: Deactivated successfully.
May 13 10:01:50.952441 systemd[1]: session-2.scope: Deactivated successfully.
May 13 10:01:50.953374 systemd-logind[1575]: Session 2 logged out. Waiting for processes to exit.
May 13 10:01:50.956530 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:57802.service - OpenSSH per-connection server daemon (10.0.0.1:57802).
May 13 10:01:50.957516 systemd-logind[1575]: Removed session 2.
May 13 10:01:51.010665 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 57802 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:01:51.012375 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:01:51.017076 systemd-logind[1575]: New session 3 of user core.
May 13 10:01:51.030015 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 10:01:51.080597 sshd[1743]: Connection closed by 10.0.0.1 port 57802
May 13 10:01:51.080999 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
May 13 10:01:51.089693 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:57802.service: Deactivated successfully.
May 13 10:01:51.091551 systemd[1]: session-3.scope: Deactivated successfully.
May 13 10:01:51.092409 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit.
May 13 10:01:51.095261 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:57808.service - OpenSSH per-connection server daemon (10.0.0.1:57808).
May 13 10:01:51.095904 systemd-logind[1575]: Removed session 3.
May 13 10:01:51.157507 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 57808 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:01:51.159251 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:01:51.164249 systemd-logind[1575]: New session 4 of user core.
May 13 10:01:51.174025 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 10:01:51.227582 sshd[1751]: Connection closed by 10.0.0.1 port 57808
May 13 10:01:51.227921 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
May 13 10:01:51.239226 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:57808.service: Deactivated successfully.
May 13 10:01:51.241752 systemd[1]: session-4.scope: Deactivated successfully.
May 13 10:01:51.242596 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit.
May 13 10:01:51.246021 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:57824.service - OpenSSH per-connection server daemon (10.0.0.1:57824).
May 13 10:01:51.246721 systemd-logind[1575]: Removed session 4.
May 13 10:01:51.312008 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 57824 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:01:51.313903 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:01:51.319070 systemd-logind[1575]: New session 5 of user core.
May 13 10:01:51.328086 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 10:01:51.387796 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 10:01:51.388203 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:01:51.409203 sudo[1760]: pam_unix(sudo:session): session closed for user root
May 13 10:01:51.411392 sshd[1759]: Connection closed by 10.0.0.1 port 57824
May 13 10:01:51.411769 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
May 13 10:01:51.428123 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:57824.service: Deactivated successfully.
May 13 10:01:51.430256 systemd[1]: session-5.scope: Deactivated successfully.
May 13 10:01:51.431135 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit.
May 13 10:01:51.434127 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:57832.service - OpenSSH per-connection server daemon (10.0.0.1:57832).
May 13 10:01:51.434669 systemd-logind[1575]: Removed session 5.
May 13 10:01:51.489946 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 57832 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:01:51.492008 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:01:51.496968 systemd-logind[1575]: New session 6 of user core.
May 13 10:01:51.506071 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 10:01:51.562182 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 10:01:51.562519 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:01:51.817744 sudo[1771]: pam_unix(sudo:session): session closed for user root
May 13 10:01:51.825566 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 10:01:51.825981 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:01:51.837628 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 10:01:51.885584 augenrules[1793]: No rules
May 13 10:01:51.887539 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 10:01:51.887827 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 10:01:51.889210 sudo[1770]: pam_unix(sudo:session): session closed for user root
May 13 10:01:51.890863 sshd[1769]: Connection closed by 10.0.0.1 port 57832
May 13 10:01:51.891222 sshd-session[1766]: pam_unix(sshd:session): session closed for user core
May 13 10:01:51.905215 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:57832.service: Deactivated successfully.
May 13 10:01:51.907164 systemd[1]: session-6.scope: Deactivated successfully.
May 13 10:01:51.908012 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit.
May 13 10:01:51.911243 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:57840.service - OpenSSH per-connection server daemon (10.0.0.1:57840).
May 13 10:01:51.912068 systemd-logind[1575]: Removed session 6.
May 13 10:01:51.975032 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 57840 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:01:51.976670 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:01:51.981480 systemd-logind[1575]: New session 7 of user core.
May 13 10:01:51.991074 systemd[1]: Started session-7.scope - Session 7 of User core.
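The augenrules "No rules" line above simply means /etc/audit/rules.d contained no rule files once the sudo commands had deleted 80-selinux.rules and 99-default.rules. Restoring auditing would mean dropping a rules file back in and reloading, roughly as follows (the watch rule is an illustrative example, not one from this system):

    # /etc/audit/rules.d/99-example.rules (hypothetical)
    -w /etc/passwd -p wa -k passwd_changes

    augenrules --load    # regenerate /etc/audit/audit.rules from rules.d and load it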
May 13 10:01:52.045470 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 10:01:52.045793 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:01:52.487798 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 10:01:52.506340 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 10:01:52.942909 dockerd[1826]: time="2025-05-13T10:01:52.942715474Z" level=info msg="Starting up"
May 13 10:01:52.943730 dockerd[1826]: time="2025-05-13T10:01:52.943684145Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 10:01:53.022746 dockerd[1826]: time="2025-05-13T10:01:53.022695073Z" level=info msg="Loading containers: start."
May 13 10:01:53.032896 kernel: Initializing XFRM netlink socket
May 13 10:01:53.427293 systemd-networkd[1496]: docker0: Link UP
May 13 10:01:53.433410 dockerd[1826]: time="2025-05-13T10:01:53.433361239Z" level=info msg="Loading containers: done."
May 13 10:01:53.453852 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck162425232-merged.mount: Deactivated successfully.
May 13 10:01:53.455646 dockerd[1826]: time="2025-05-13T10:01:53.455591599Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 10:01:53.455713 dockerd[1826]: time="2025-05-13T10:01:53.455689129Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 13 10:01:53.455824 dockerd[1826]: time="2025-05-13T10:01:53.455805001Z" level=info msg="Initializing buildkit"
May 13 10:01:53.488120 dockerd[1826]: time="2025-05-13T10:01:53.488068334Z" level=info msg="Completed buildkit initialization"
May 13 10:01:53.525941 dockerd[1826]: time="2025-05-13T10:01:53.525839746Z" level=info msg="Daemon has completed initialization"
May 13 10:01:53.526100 dockerd[1826]: time="2025-05-13T10:01:53.525961906Z" level=info msg="API listen on /run/docker.sock"
May 13 10:01:53.526151 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 10:01:55.191052 containerd[1591]: time="2025-05-13T10:01:55.190977634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 10:01:55.854452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556405092.mount: Deactivated successfully.
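Once dockerd logs "API listen on /run/docker.sock" above, the engine is reachable over that Unix socket; a quick liveness check from the host would be (standard Docker CLI and API, shown as a sketch):

    docker version                                               # talks to /run/docker.sock by default
    curl --unix-socket /run/docker.sock http://localhost/_ping   # raw API ping; returns "OK"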
May 13 10:01:57.494033 containerd[1591]: time="2025-05-13T10:01:57.493960524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:01:57.494807 containerd[1591]: time="2025-05-13T10:01:57.494748836Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 13 10:01:57.496004 containerd[1591]: time="2025-05-13T10:01:57.495973300Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:01:57.498697 containerd[1591]: time="2025-05-13T10:01:57.498651642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:01:57.499750 containerd[1591]: time="2025-05-13T10:01:57.499687986Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.308656162s"
May 13 10:01:57.499799 containerd[1591]: time="2025-05-13T10:01:57.499756333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 13 10:01:57.557119 containerd[1591]: time="2025-05-13T10:01:57.557074557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 10:01:57.642504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 10:01:57.644363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 10:01:57.891687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 10:01:57.916392 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 10:01:58.084732 kubelet[2114]: E0513 10:01:58.084607 2114 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 10:01:58.092294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 10:01:58.092522 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 10:01:58.092958 systemd[1]: kubelet.service: Consumed 283ms CPU time, 100.9M memory peak.
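The kubelet crash loop above (restart counter 1, 2, ...) is the classic symptom of starting kubelet.service before kubeadm has run: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, not shipped with the OS, so the loop resolves itself once bootstrap completes. A minimal stand-in for that file would look roughly like this (illustrative only; on this machine the real file is expected to come from kubeadm):

    # /var/lib/kubelet/config.yaml (normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd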
May 13 10:01:59.872889 containerd[1591]: time="2025-05-13T10:01:59.872794118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:01:59.873658 containerd[1591]: time="2025-05-13T10:01:59.873584109Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 13 10:01:59.874805 containerd[1591]: time="2025-05-13T10:01:59.874693721Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:01:59.877688 containerd[1591]: time="2025-05-13T10:01:59.877637201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:01:59.878607 containerd[1591]: time="2025-05-13T10:01:59.878560635Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.321444054s"
May 13 10:01:59.878607 containerd[1591]: time="2025-05-13T10:01:59.878602921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 13 10:01:59.906518 containerd[1591]: time="2025-05-13T10:01:59.906465944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 10:02:01.238100 containerd[1591]: time="2025-05-13T10:02:01.238044633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:01.238998 containerd[1591]: time="2025-05-13T10:02:01.238967849Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 13 10:02:01.240112 containerd[1591]: time="2025-05-13T10:02:01.240072783Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:01.242420 containerd[1591]: time="2025-05-13T10:02:01.242365483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:01.243279 containerd[1591]: time="2025-05-13T10:02:01.243243029Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.336740082s"
May 13 10:02:01.243279 containerd[1591]: time="2025-05-13T10:02:01.243273593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 13 10:02:01.268762 containerd[1591]: time="2025-05-13T10:02:01.268712782Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 10:02:02.444673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166080279.mount: Deactivated successfully.
May 13 10:02:03.347375 containerd[1591]: time="2025-05-13T10:02:03.347306197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:03.355397 containerd[1591]: time="2025-05-13T10:02:03.355347964Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
May 13 10:02:03.368802 containerd[1591]: time="2025-05-13T10:02:03.368699637Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:03.372497 containerd[1591]: time="2025-05-13T10:02:03.372444759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:03.372985 containerd[1591]: time="2025-05-13T10:02:03.372946549Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.104198637s"
May 13 10:02:03.372985 containerd[1591]: time="2025-05-13T10:02:03.372976537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 13 10:02:03.399596 containerd[1591]: time="2025-05-13T10:02:03.399544772Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 10:02:03.909327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773925741.mount: Deactivated successfully.
May 13 10:02:04.824222 containerd[1591]: time="2025-05-13T10:02:04.824127733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:04.825006 containerd[1591]: time="2025-05-13T10:02:04.824947690Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 13 10:02:04.827058 containerd[1591]: time="2025-05-13T10:02:04.827018865Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:04.832527 containerd[1591]: time="2025-05-13T10:02:04.832464103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:04.833406 containerd[1591]: time="2025-05-13T10:02:04.833365680Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.433778884s"
May 13 10:02:04.833406 containerd[1591]: time="2025-05-13T10:02:04.833395130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 10:02:04.943243 containerd[1591]: time="2025-05-13T10:02:04.943174949Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 10:02:06.186350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068479356.mount: Deactivated successfully.
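The PullImage/Pulled pairs above are containerd pre-fetching the v1.30.12 control-plane images plus coredns and pause ahead of cluster bootstrap. The same pulls can be driven by hand through the CRI socket (crictl is standard CRI tooling, though its presence on this host is an assumption):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/pause:3.9    # same image the log resolves to sha256:e6f18168...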
May 13 10:02:06.194070 containerd[1591]: time="2025-05-13T10:02:06.194025617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:06.194898 containerd[1591]: time="2025-05-13T10:02:06.194854768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
May 13 10:02:06.196531 containerd[1591]: time="2025-05-13T10:02:06.196499820Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:06.198500 containerd[1591]: time="2025-05-13T10:02:06.198463163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:06.199069 containerd[1591]: time="2025-05-13T10:02:06.199042576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.255809188s"
May 13 10:02:06.199103 containerd[1591]: time="2025-05-13T10:02:06.199070107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 13 10:02:06.228767 containerd[1591]: time="2025-05-13T10:02:06.228725556Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 10:02:06.798551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016664749.mount: Deactivated successfully.
May 13 10:02:08.342937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 10:02:08.345916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 10:02:09.065987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 10:02:09.069900 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 10:02:09.280881 kubelet[2285]: E0513 10:02:09.280774 2285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 10:02:09.285398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 10:02:09.285647 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 10:02:09.286250 systemd[1]: kubelet.service: Consumed 280ms CPU time, 95.6M memory peak.
May 13 10:02:09.398275 containerd[1591]: time="2025-05-13T10:02:09.398089885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:09.399186 containerd[1591]: time="2025-05-13T10:02:09.399128863Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 13 10:02:09.400718 containerd[1591]: time="2025-05-13T10:02:09.400673684Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:09.403573 containerd[1591]: time="2025-05-13T10:02:09.403513665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:09.404733 containerd[1591]: time="2025-05-13T10:02:09.404689613Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.175921985s" May 13 10:02:09.404733 containerd[1591]: time="2025-05-13T10:02:09.404731531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 10:02:12.455498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:12.455714 systemd[1]: kubelet.service: Consumed 280ms CPU time, 95.6M memory peak. May 13 10:02:12.458255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:12.479816 systemd[1]: Reload requested from client PID 2396 ('systemctl') (unit session-7.scope)... May 13 10:02:12.479834 systemd[1]: Reloading... May 13 10:02:12.582970 zram_generator::config[2446]: No configuration found. May 13 10:02:12.777569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:02:12.894288 systemd[1]: Reloading finished in 414 ms. May 13 10:02:12.956131 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 10:02:12.956232 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 10:02:12.956532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:12.956602 systemd[1]: kubelet.service: Consumed 138ms CPU time, 83.6M memory peak. May 13 10:02:12.959153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:13.129080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:13.143172 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:02:13.186675 kubelet[2489]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:02:13.186675 kubelet[2489]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 13 10:02:13.186675 kubelet[2489]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:02:13.187129 kubelet[2489]: I0513 10:02:13.186701 2489 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:02:13.500254 kubelet[2489]: I0513 10:02:13.500118 2489 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 10:02:13.500254 kubelet[2489]: I0513 10:02:13.500153 2489 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:02:13.500617 kubelet[2489]: I0513 10:02:13.500590 2489 server.go:927] "Client rotation is on, will bootstrap in background" May 13 10:02:13.521608 kubelet[2489]: I0513 10:02:13.521541 2489 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:02:13.522302 kubelet[2489]: E0513 10:02:13.522270 2489 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.536464 kubelet[2489]: I0513 10:02:13.536416 2489 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 10:02:13.539342 kubelet[2489]: I0513 10:02:13.539292 2489 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:02:13.539529 kubelet[2489]: I0513 10:02:13.539336 2489 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 10:02:13.539632 kubelet[2489]: I0513 
10:02:13.539546 2489 topology_manager.go:138] "Creating topology manager with none policy" May 13 10:02:13.539632 kubelet[2489]: I0513 10:02:13.539557 2489 container_manager_linux.go:301] "Creating device plugin manager" May 13 10:02:13.539737 kubelet[2489]: I0513 10:02:13.539719 2489 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:13.540681 kubelet[2489]: I0513 10:02:13.540657 2489 kubelet.go:400] "Attempting to sync node with API server" May 13 10:02:13.540681 kubelet[2489]: I0513 10:02:13.540679 2489 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:02:13.540738 kubelet[2489]: I0513 10:02:13.540702 2489 kubelet.go:312] "Adding apiserver pod source" May 13 10:02:13.540738 kubelet[2489]: I0513 10:02:13.540717 2489 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:02:13.541233 kubelet[2489]: W0513 10:02:13.541191 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.541233 kubelet[2489]: E0513 10:02:13.541233 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.542969 kubelet[2489]: W0513 10:02:13.542926 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.542969 kubelet[2489]: E0513 10:02:13.542964 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.545273 kubelet[2489]: I0513 10:02:13.545224 2489 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:02:13.547013 kubelet[2489]: I0513 10:02:13.546982 2489 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:02:13.547153 kubelet[2489]: W0513 10:02:13.547138 2489 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
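The Container Manager config logged above carries the kubelet's default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%). As a small illustration of how such a signal evaluates, here is a minimal Go sketch, not kubelet's actual code, that applies the logged "LessThan" operator to either an absolute quantity or a percentage-of-capacity threshold:

    package main

    import "fmt"

    // threshold mirrors the shape of the HardEvictionThresholds entries in the
    // log above: either an absolute quantity (bytes) or a fraction of capacity.
    // The names are illustrative, not kubelet's real types.
    type threshold struct {
        signal     string
        quantity   int64   // absolute bytes; 0 means "use percentage"
        percentage float64 // fraction of capacity, e.g. 0.1 for 10%
    }

    // breached reports whether the observed value is below the threshold,
    // i.e. the "LessThan" operator from the logged config.
    func breached(t threshold, available, capacity int64) bool {
        limit := t.quantity
        if limit == 0 {
            limit = int64(t.percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percentage: 0.1}     // 10%

        fmt.Println(breached(memory, 64<<20, 8<<30))   // true: 64Mi < 100Mi
        fmt.Println(breached(nodefs, 20<<30, 100<<30)) // false: 20% of disk free >= 10%
    }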
May 13 10:02:13.548043 kubelet[2489]: I0513 10:02:13.548025 2489 server.go:1264] "Started kubelet" May 13 10:02:13.554898 kubelet[2489]: I0513 10:02:13.553603 2489 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:02:13.554898 kubelet[2489]: I0513 10:02:13.554142 2489 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:02:13.554898 kubelet[2489]: I0513 10:02:13.554582 2489 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:02:13.554898 kubelet[2489]: I0513 10:02:13.554629 2489 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 10:02:13.554898 kubelet[2489]: I0513 10:02:13.554669 2489 server.go:455] "Adding debug handlers to kubelet server" May 13 10:02:13.555397 kubelet[2489]: E0513 10:02:13.555286 2489 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f0df6ac5e0881 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 10:02:13.548001409 +0000 UTC m=+0.400502960,LastTimestamp:2025-05-13 10:02:13.548001409 +0000 UTC m=+0.400502960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 10:02:13.555781 kubelet[2489]: I0513 10:02:13.555770 2489 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 10:02:13.556230 kubelet[2489]: I0513 10:02:13.556215 2489 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 10:02:13.556345 kubelet[2489]: I0513 10:02:13.556334 2489 reconciler.go:26] "Reconciler: start to sync state" May 13 10:02:13.556754 kubelet[2489]: W0513 10:02:13.556713 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.556832 kubelet[2489]: E0513 10:02:13.556821 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.557262 kubelet[2489]: E0513 10:02:13.557223 2489 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" May 13 10:02:13.557550 kubelet[2489]: E0513 10:02:13.557525 2489 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:02:13.557645 kubelet[2489]: I0513 10:02:13.557622 2489 factory.go:221] Registration of the systemd container factory successfully May 13 10:02:13.557767 kubelet[2489]: I0513 10:02:13.557726 2489 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:02:13.558713 kubelet[2489]: I0513 10:02:13.558694 2489 factory.go:221] Registration of the containerd container factory successfully May 13 10:02:13.575097 kubelet[2489]: I0513 10:02:13.575058 2489 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 10:02:13.575097 kubelet[2489]: I0513 10:02:13.575072 2489 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 10:02:13.575097 kubelet[2489]: I0513 10:02:13.575101 2489 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:13.575340 kubelet[2489]: I0513 10:02:13.575297 2489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:02:13.576796 kubelet[2489]: I0513 10:02:13.576773 2489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 10:02:13.576850 kubelet[2489]: I0513 10:02:13.576810 2489 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 10:02:13.576987 kubelet[2489]: I0513 10:02:13.576966 2489 kubelet.go:2337] "Starting kubelet main sync loop" May 13 10:02:13.577028 kubelet[2489]: E0513 10:02:13.577011 2489 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:02:13.577733 kubelet[2489]: W0513 10:02:13.577686 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.577797 kubelet[2489]: E0513 10:02:13.577745 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:13.657956 kubelet[2489]: I0513 10:02:13.657924 2489 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 10:02:13.658245 kubelet[2489]: E0513 10:02:13.658220 2489 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 13 10:02:13.677391 kubelet[2489]: E0513 10:02:13.677362 2489 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 10:02:13.758469 kubelet[2489]: E0513 10:02:13.758247 2489 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" May 13 10:02:13.859669 kubelet[2489]: I0513 10:02:13.859622 2489 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 10:02:13.860025 kubelet[2489]: E0513 10:02:13.859988 2489 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 13 10:02:13.878189 kubelet[2489]: E0513 10:02:13.878126 2489 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 10:02:13.955471 kubelet[2489]: I0513 10:02:13.955404 2489 policy_none.go:49] "None policy: Start" May 13 10:02:13.956830 kubelet[2489]: I0513 10:02:13.956654 2489 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 10:02:13.956830 kubelet[2489]: I0513 10:02:13.956685 2489 state_mem.go:35] "Initializing new in-memory state store" May 13 10:02:13.964919 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 10:02:13.978221 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 10:02:13.981612 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 10:02:13.996909 kubelet[2489]: I0513 10:02:13.996754 2489 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:02:13.997073 kubelet[2489]: I0513 10:02:13.997029 2489 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:02:13.997176 kubelet[2489]: I0513 10:02:13.997157 2489 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:02:13.998557 kubelet[2489]: E0513 10:02:13.998530 2489 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 10:02:14.159434 kubelet[2489]: E0513 10:02:14.159259 2489 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" May 13 10:02:14.262059 kubelet[2489]: I0513 10:02:14.262008 2489 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 10:02:14.262480 kubelet[2489]: E0513 10:02:14.262358 2489 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 13 10:02:14.278520 kubelet[2489]: I0513 10:02:14.278469 2489 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 10:02:14.279391 kubelet[2489]: I0513 10:02:14.279358 2489 topology_manager.go:215] "Topology Admit Handler" podUID="8e648910b66e4ee959fa3978e33a51f8" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 10:02:14.280353 kubelet[2489]: I0513 10:02:14.280319 2489 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 10:02:14.286129 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 13 10:02:14.317712 systemd[1]: Created slice kubepods-burstable-pod8e648910b66e4ee959fa3978e33a51f8.slice - libcontainer container kubepods-burstable-pod8e648910b66e4ee959fa3978e33a51f8.slice. 
May 13 10:02:14.332508 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 13 10:02:14.361628 kubelet[2489]: I0513 10:02:14.361591 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:14.361628 kubelet[2489]: I0513 10:02:14.361632 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e648910b66e4ee959fa3978e33a51f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e648910b66e4ee959fa3978e33a51f8\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:14.361768 kubelet[2489]: I0513 10:02:14.361654 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:14.361768 kubelet[2489]: I0513 10:02:14.361673 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:14.361768 kubelet[2489]: I0513 10:02:14.361695 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:14.361768 kubelet[2489]: I0513 10:02:14.361734 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:14.361768 kubelet[2489]: I0513 10:02:14.361765 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 10:02:14.361898 kubelet[2489]: I0513 10:02:14.361779 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e648910b66e4ee959fa3978e33a51f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e648910b66e4ee959fa3978e33a51f8\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:14.361898 kubelet[2489]: I0513 10:02:14.361793 2489 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e648910b66e4ee959fa3978e33a51f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e648910b66e4ee959fa3978e33a51f8\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:14.439429 kubelet[2489]: W0513 10:02:14.439246 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:14.439429 kubelet[2489]: E0513 10:02:14.439340 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:14.612898 kubelet[2489]: E0513 10:02:14.612820 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:14.613498 containerd[1591]: time="2025-05-13T10:02:14.613453904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 10:02:14.629773 kubelet[2489]: E0513 10:02:14.629725 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:14.630222 containerd[1591]: time="2025-05-13T10:02:14.630169885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e648910b66e4ee959fa3978e33a51f8,Namespace:kube-system,Attempt:0,}" May 13 10:02:14.635426 kubelet[2489]: E0513 10:02:14.635381 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:14.635674 containerd[1591]: time="2025-05-13T10:02:14.635646460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 10:02:14.678654 kubelet[2489]: W0513 10:02:14.678605 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:14.678654 kubelet[2489]: E0513 10:02:14.678656 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:14.679924 kubelet[2489]: W0513 10:02:14.679865 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:14.679968 kubelet[2489]: E0513 10:02:14.679927 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection 
refused May 13 10:02:14.960583 kubelet[2489]: E0513 10:02:14.960533 2489 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" May 13 10:02:14.984148 kubelet[2489]: W0513 10:02:14.984094 2489 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:14.984148 kubelet[2489]: E0513 10:02:14.984149 2489 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:15.064347 kubelet[2489]: I0513 10:02:15.064318 2489 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 10:02:15.064778 kubelet[2489]: E0513 10:02:15.064720 2489 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 13 10:02:15.302027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783061621.mount: Deactivated successfully. May 13 10:02:15.307209 containerd[1591]: time="2025-05-13T10:02:15.307161544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:02:15.310891 containerd[1591]: time="2025-05-13T10:02:15.310816457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 10:02:15.311693 containerd[1591]: time="2025-05-13T10:02:15.311649840Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:02:15.312733 containerd[1591]: time="2025-05-13T10:02:15.312690986Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:02:15.313650 containerd[1591]: time="2025-05-13T10:02:15.313623653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 10:02:15.314768 containerd[1591]: time="2025-05-13T10:02:15.314715779Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:02:15.315785 containerd[1591]: time="2025-05-13T10:02:15.315741779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 10:02:15.317956 containerd[1591]: time="2025-05-13T10:02:15.317923548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:02:15.318523 containerd[1591]: time="2025-05-13T10:02:15.318491865Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 493.491171ms" May 13 10:02:15.319068 containerd[1591]: time="2025-05-13T10:02:15.319034808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.611606ms" May 13 10:02:15.321536 containerd[1591]: time="2025-05-13T10:02:15.321504601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.315142ms" May 13 10:02:15.345906 containerd[1591]: time="2025-05-13T10:02:15.345369375Z" level=info msg="connecting to shim 6f542912303d35068d22f8443870f4e29ab0249a132da4eb1bd3138d4cf6196a" address="unix:///run/containerd/s/7f752e088edd01ac33a9d644bfca5e442fcbf031cf45847e19304a43a5469405" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:15.356911 containerd[1591]: time="2025-05-13T10:02:15.356842256Z" level=info msg="connecting to shim e704456564761f3d8e686a311dbbc625eaf5ee1e5f36d35baa05fe7454b1833f" address="unix:///run/containerd/s/0c3a8fb78d33d9b5843defb0f8860b88cac29f3bc8aa1c0583c27933d0a496bc" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:15.357435 containerd[1591]: time="2025-05-13T10:02:15.357408710Z" level=info msg="connecting to shim 37a14d9b7a55ef4a1c79074e06507b9b3905786cc7d0a43b43955d634d830960" address="unix:///run/containerd/s/a8cf402358e6b9ca1acd749eeae3b94c8932bb2efd5b0abf617d56906a1d520c" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:15.388092 systemd[1]: Started cri-containerd-6f542912303d35068d22f8443870f4e29ab0249a132da4eb1bd3138d4cf6196a.scope - libcontainer container 6f542912303d35068d22f8443870f4e29ab0249a132da4eb1bd3138d4cf6196a. May 13 10:02:15.406012 systemd[1]: Started cri-containerd-37a14d9b7a55ef4a1c79074e06507b9b3905786cc7d0a43b43955d634d830960.scope - libcontainer container 37a14d9b7a55ef4a1c79074e06507b9b3905786cc7d0a43b43955d634d830960. May 13 10:02:15.407502 systemd[1]: Started cri-containerd-e704456564761f3d8e686a311dbbc625eaf5ee1e5f36d35baa05fe7454b1833f.scope - libcontainer container e704456564761f3d8e686a311dbbc625eaf5ee1e5f36d35baa05fe7454b1833f. 
May 13 10:02:15.458589 containerd[1591]: time="2025-05-13T10:02:15.458535309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f542912303d35068d22f8443870f4e29ab0249a132da4eb1bd3138d4cf6196a\"" May 13 10:02:15.460277 kubelet[2489]: E0513 10:02:15.460249 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:15.464008 containerd[1591]: time="2025-05-13T10:02:15.463956963Z" level=info msg="CreateContainer within sandbox \"6f542912303d35068d22f8443870f4e29ab0249a132da4eb1bd3138d4cf6196a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 10:02:15.464978 containerd[1591]: time="2025-05-13T10:02:15.464945007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e648910b66e4ee959fa3978e33a51f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"37a14d9b7a55ef4a1c79074e06507b9b3905786cc7d0a43b43955d634d830960\"" May 13 10:02:15.465595 kubelet[2489]: E0513 10:02:15.465398 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:15.467394 containerd[1591]: time="2025-05-13T10:02:15.467361457Z" level=info msg="CreateContainer within sandbox \"37a14d9b7a55ef4a1c79074e06507b9b3905786cc7d0a43b43955d634d830960\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 10:02:15.469857 containerd[1591]: time="2025-05-13T10:02:15.469820671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e704456564761f3d8e686a311dbbc625eaf5ee1e5f36d35baa05fe7454b1833f\"" May 13 10:02:15.470462 kubelet[2489]: E0513 10:02:15.470347 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:15.471783 containerd[1591]: time="2025-05-13T10:02:15.471753493Z" level=info msg="CreateContainer within sandbox \"e704456564761f3d8e686a311dbbc625eaf5ee1e5f36d35baa05fe7454b1833f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 10:02:15.482186 containerd[1591]: time="2025-05-13T10:02:15.481937332Z" level=info msg="Container ef20099f0404146248337a99a4fbb2ec3873b8e7a4a0978fcc1edc6debb82311: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:15.486378 containerd[1591]: time="2025-05-13T10:02:15.486341840Z" level=info msg="Container fe9bd5d8740cfa8c4651f73eaa58603347d16892e12518d3d1738756ae327979: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:15.492079 containerd[1591]: time="2025-05-13T10:02:15.492041571Z" level=info msg="CreateContainer within sandbox \"6f542912303d35068d22f8443870f4e29ab0249a132da4eb1bd3138d4cf6196a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ef20099f0404146248337a99a4fbb2ec3873b8e7a4a0978fcc1edc6debb82311\"" May 13 10:02:15.492923 containerd[1591]: time="2025-05-13T10:02:15.492268148Z" level=info msg="Container 3dfafa25941bf5e8ae03fa3e6528c9921e6b0ea39276ac104ce4734fcaae3b84: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:15.492923 containerd[1591]: time="2025-05-13T10:02:15.492687395Z" 
level=info msg="StartContainer for \"ef20099f0404146248337a99a4fbb2ec3873b8e7a4a0978fcc1edc6debb82311\"" May 13 10:02:15.493703 containerd[1591]: time="2025-05-13T10:02:15.493674016Z" level=info msg="connecting to shim ef20099f0404146248337a99a4fbb2ec3873b8e7a4a0978fcc1edc6debb82311" address="unix:///run/containerd/s/7f752e088edd01ac33a9d644bfca5e442fcbf031cf45847e19304a43a5469405" protocol=ttrpc version=3 May 13 10:02:15.496388 containerd[1591]: time="2025-05-13T10:02:15.496361531Z" level=info msg="CreateContainer within sandbox \"37a14d9b7a55ef4a1c79074e06507b9b3905786cc7d0a43b43955d634d830960\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe9bd5d8740cfa8c4651f73eaa58603347d16892e12518d3d1738756ae327979\"" May 13 10:02:15.496731 containerd[1591]: time="2025-05-13T10:02:15.496714170Z" level=info msg="StartContainer for \"fe9bd5d8740cfa8c4651f73eaa58603347d16892e12518d3d1738756ae327979\"" May 13 10:02:15.498200 containerd[1591]: time="2025-05-13T10:02:15.498178059Z" level=info msg="connecting to shim fe9bd5d8740cfa8c4651f73eaa58603347d16892e12518d3d1738756ae327979" address="unix:///run/containerd/s/a8cf402358e6b9ca1acd749eeae3b94c8932bb2efd5b0abf617d56906a1d520c" protocol=ttrpc version=3 May 13 10:02:15.500143 containerd[1591]: time="2025-05-13T10:02:15.500119495Z" level=info msg="CreateContainer within sandbox \"e704456564761f3d8e686a311dbbc625eaf5ee1e5f36d35baa05fe7454b1833f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3dfafa25941bf5e8ae03fa3e6528c9921e6b0ea39276ac104ce4734fcaae3b84\"" May 13 10:02:15.501319 containerd[1591]: time="2025-05-13T10:02:15.501255548Z" level=info msg="StartContainer for \"3dfafa25941bf5e8ae03fa3e6528c9921e6b0ea39276ac104ce4734fcaae3b84\"" May 13 10:02:15.502529 containerd[1591]: time="2025-05-13T10:02:15.502463158Z" level=info msg="connecting to shim 3dfafa25941bf5e8ae03fa3e6528c9921e6b0ea39276ac104ce4734fcaae3b84" address="unix:///run/containerd/s/0c3a8fb78d33d9b5843defb0f8860b88cac29f3bc8aa1c0583c27933d0a496bc" protocol=ttrpc version=3 May 13 10:02:15.513101 systemd[1]: Started cri-containerd-ef20099f0404146248337a99a4fbb2ec3873b8e7a4a0978fcc1edc6debb82311.scope - libcontainer container ef20099f0404146248337a99a4fbb2ec3873b8e7a4a0978fcc1edc6debb82311. May 13 10:02:15.516691 systemd[1]: Started cri-containerd-fe9bd5d8740cfa8c4651f73eaa58603347d16892e12518d3d1738756ae327979.scope - libcontainer container fe9bd5d8740cfa8c4651f73eaa58603347d16892e12518d3d1738756ae327979. May 13 10:02:15.527028 systemd[1]: Started cri-containerd-3dfafa25941bf5e8ae03fa3e6528c9921e6b0ea39276ac104ce4734fcaae3b84.scope - libcontainer container 3dfafa25941bf5e8ae03fa3e6528c9921e6b0ea39276ac104ce4734fcaae3b84. 
May 13 10:02:15.599928 containerd[1591]: time="2025-05-13T10:02:15.599775705Z" level=info msg="StartContainer for \"fe9bd5d8740cfa8c4651f73eaa58603347d16892e12518d3d1738756ae327979\" returns successfully" May 13 10:02:15.602896 containerd[1591]: time="2025-05-13T10:02:15.602842376Z" level=info msg="StartContainer for \"3dfafa25941bf5e8ae03fa3e6528c9921e6b0ea39276ac104ce4734fcaae3b84\" returns successfully" May 13 10:02:15.611086 containerd[1591]: time="2025-05-13T10:02:15.611018602Z" level=info msg="StartContainer for \"ef20099f0404146248337a99a4fbb2ec3873b8e7a4a0978fcc1edc6debb82311\" returns successfully" May 13 10:02:15.613662 kubelet[2489]: E0513 10:02:15.613622 2489 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.15:6443: connect: connection refused May 13 10:02:16.607258 kubelet[2489]: E0513 10:02:16.607207 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:16.611314 kubelet[2489]: E0513 10:02:16.611261 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:16.614763 kubelet[2489]: E0513 10:02:16.614723 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:16.668635 kubelet[2489]: I0513 10:02:16.668578 2489 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 10:02:17.045071 kubelet[2489]: E0513 10:02:17.044884 2489 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 10:02:17.152830 kubelet[2489]: I0513 10:02:17.152770 2489 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 10:02:17.169195 kubelet[2489]: E0513 10:02:17.169142 2489 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:17.269534 kubelet[2489]: E0513 10:02:17.269467 2489 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:17.370195 kubelet[2489]: E0513 10:02:17.370044 2489 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:17.470861 kubelet[2489]: E0513 10:02:17.470814 2489 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:17.571915 kubelet[2489]: E0513 10:02:17.571805 2489 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:17.625247 kubelet[2489]: E0513 10:02:17.625106 2489 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 10:02:17.625614 kubelet[2489]: E0513 10:02:17.625106 2489 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-scheduler-localhost" May 13 10:02:17.625614 kubelet[2489]: E0513 10:02:17.625106 2489 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 10:02:17.625614 kubelet[2489]: E0513 10:02:17.625535 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:17.625614 kubelet[2489]: E0513 10:02:17.625580 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:17.625926 kubelet[2489]: E0513 10:02:17.625892 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:18.545336 kubelet[2489]: I0513 10:02:18.545275 2489 apiserver.go:52] "Watching apiserver" May 13 10:02:18.556679 kubelet[2489]: I0513 10:02:18.556652 2489 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 10:02:18.620218 kubelet[2489]: E0513 10:02:18.620177 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:19.079236 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-7.scope)... May 13 10:02:19.079256 systemd[1]: Reloading... May 13 10:02:19.166264 zram_generator::config[2814]: No configuration found. May 13 10:02:19.267543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:02:19.405147 systemd[1]: Reloading finished in 325 ms. May 13 10:02:19.437020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:19.455939 systemd[1]: kubelet.service: Deactivated successfully. May 13 10:02:19.456293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:19.456350 systemd[1]: kubelet.service: Consumed 835ms CPU time, 114.2M memory peak. May 13 10:02:19.458801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:19.650233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:19.659415 (kubelet)[2858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:02:19.722572 kubelet[2858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:02:19.722572 kubelet[2858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 10:02:19.722572 kubelet[2858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:02:19.723151 kubelet[2858]: I0513 10:02:19.722612 2858 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:02:19.737150 kubelet[2858]: I0513 10:02:19.737113 2858 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 10:02:19.737150 kubelet[2858]: I0513 10:02:19.737148 2858 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:02:19.737354 kubelet[2858]: I0513 10:02:19.737336 2858 server.go:927] "Client rotation is on, will bootstrap in background" May 13 10:02:19.738900 kubelet[2858]: I0513 10:02:19.738848 2858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 10:02:19.740443 kubelet[2858]: I0513 10:02:19.740390 2858 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:02:19.747607 kubelet[2858]: I0513 10:02:19.747563 2858 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 10:02:19.747850 kubelet[2858]: I0513 10:02:19.747801 2858 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:02:19.748039 kubelet[2858]: I0513 10:02:19.747836 2858 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 10:02:19.748141 kubelet[2858]: I0513 10:02:19.748041 2858 topology_manager.go:138] "Creating topology manager with none policy" May 13 10:02:19.748141 kubelet[2858]: I0513 10:02:19.748050 2858 container_manager_linux.go:301] "Creating device plugin manager" May 13 10:02:19.748141 kubelet[2858]: I0513 10:02:19.748093 2858 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:19.748324 kubelet[2858]: I0513 10:02:19.748199 2858 kubelet.go:400] "Attempting to sync node with API server" May 13 
10:02:19.748324 kubelet[2858]: I0513 10:02:19.748210 2858 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:02:19.748324 kubelet[2858]: I0513 10:02:19.748229 2858 kubelet.go:312] "Adding apiserver pod source" May 13 10:02:19.748324 kubelet[2858]: I0513 10:02:19.748247 2858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:02:19.752666 kubelet[2858]: I0513 10:02:19.752628 2858 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:02:19.752863 kubelet[2858]: I0513 10:02:19.752839 2858 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:02:19.753339 kubelet[2858]: I0513 10:02:19.753317 2858 server.go:1264] "Started kubelet" May 13 10:02:19.754926 kubelet[2858]: I0513 10:02:19.754899 2858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 10:02:19.755212 kubelet[2858]: I0513 10:02:19.755095 2858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:02:19.756569 kubelet[2858]: I0513 10:02:19.756490 2858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:02:19.756905 kubelet[2858]: I0513 10:02:19.756693 2858 server.go:455] "Adding debug handlers to kubelet server" May 13 10:02:19.756905 kubelet[2858]: I0513 10:02:19.756826 2858 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:02:19.757751 kubelet[2858]: I0513 10:02:19.757718 2858 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 10:02:19.758249 kubelet[2858]: I0513 10:02:19.758221 2858 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 10:02:19.760519 kubelet[2858]: I0513 10:02:19.760487 2858 reconciler.go:26] "Reconciler: start to sync state" May 13 10:02:19.762364 kubelet[2858]: I0513 10:02:19.762323 2858 factory.go:221] Registration of the systemd container factory successfully May 13 10:02:19.762450 kubelet[2858]: I0513 10:02:19.762418 2858 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:02:19.766574 kubelet[2858]: E0513 10:02:19.766526 2858 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:02:19.767364 kubelet[2858]: I0513 10:02:19.767324 2858 factory.go:221] Registration of the containerd container factory successfully May 13 10:02:19.773459 kubelet[2858]: I0513 10:02:19.773400 2858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:02:19.775045 kubelet[2858]: I0513 10:02:19.775008 2858 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 10:02:19.775045 kubelet[2858]: I0513 10:02:19.775044 2858 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 10:02:19.775149 kubelet[2858]: I0513 10:02:19.775080 2858 kubelet.go:2337] "Starting kubelet main sync loop" May 13 10:02:19.775149 kubelet[2858]: E0513 10:02:19.775129 2858 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:02:19.829613 kubelet[2858]: I0513 10:02:19.829571 2858 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 10:02:19.829613 kubelet[2858]: I0513 10:02:19.829617 2858 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 10:02:19.829779 kubelet[2858]: I0513 10:02:19.829644 2858 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:19.829852 kubelet[2858]: I0513 10:02:19.829830 2858 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 10:02:19.829918 kubelet[2858]: I0513 10:02:19.829851 2858 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 10:02:19.829918 kubelet[2858]: I0513 10:02:19.829903 2858 policy_none.go:49] "None policy: Start" May 13 10:02:19.830680 kubelet[2858]: I0513 10:02:19.830660 2858 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 10:02:19.830730 kubelet[2858]: I0513 10:02:19.830685 2858 state_mem.go:35] "Initializing new in-memory state store" May 13 10:02:19.830857 kubelet[2858]: I0513 10:02:19.830840 2858 state_mem.go:75] "Updated machine memory state" May 13 10:02:19.835803 kubelet[2858]: I0513 10:02:19.835710 2858 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:02:19.836003 kubelet[2858]: I0513 10:02:19.835961 2858 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:02:19.836110 kubelet[2858]: I0513 10:02:19.836087 2858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:02:19.863690 kubelet[2858]: I0513 10:02:19.863647 2858 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 10:02:19.870727 kubelet[2858]: I0513 10:02:19.870696 2858 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 10:02:19.870905 kubelet[2858]: I0513 10:02:19.870768 2858 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 10:02:19.876206 kubelet[2858]: I0513 10:02:19.876129 2858 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 10:02:19.876364 kubelet[2858]: I0513 10:02:19.876298 2858 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 10:02:19.876364 kubelet[2858]: I0513 10:02:19.876362 2858 topology_manager.go:215] "Topology Admit Handler" podUID="8e648910b66e4ee959fa3978e33a51f8" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 10:02:19.883653 kubelet[2858]: E0513 10:02:19.883584 2858 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 10:02:19.962639 kubelet[2858]: I0513 10:02:19.962319 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:19.962639 kubelet[2858]: I0513 10:02:19.962382 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:19.962639 kubelet[2858]: I0513 10:02:19.962407 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 10:02:19.962639 kubelet[2858]: I0513 10:02:19.962424 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e648910b66e4ee959fa3978e33a51f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e648910b66e4ee959fa3978e33a51f8\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:19.962639 kubelet[2858]: I0513 10:02:19.962446 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:19.962853 kubelet[2858]: I0513 10:02:19.962489 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:19.962853 kubelet[2858]: I0513 10:02:19.962508 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:19.962853 kubelet[2858]: I0513 10:02:19.962529 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e648910b66e4ee959fa3978e33a51f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e648910b66e4ee959fa3978e33a51f8\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:19.962853 kubelet[2858]: I0513 10:02:19.962550 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e648910b66e4ee959fa3978e33a51f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e648910b66e4ee959fa3978e33a51f8\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:20.184178 kubelet[2858]: E0513 10:02:20.183963 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:20.184633 kubelet[2858]: E0513 10:02:20.184586 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:20.184713 kubelet[2858]: E0513 10:02:20.184680 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:20.749715 kubelet[2858]: I0513 10:02:20.749679 2858 apiserver.go:52] "Watching apiserver" May 13 10:02:20.759099 kubelet[2858]: I0513 10:02:20.759058 2858 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 10:02:20.807908 kubelet[2858]: E0513 10:02:20.807613 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:20.810326 kubelet[2858]: E0513 10:02:20.810286 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:20.963023 kubelet[2858]: E0513 10:02:20.962970 2858 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 10:02:20.963461 kubelet[2858]: E0513 10:02:20.963427 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:21.014781 kubelet[2858]: I0513 10:02:21.014552 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.0145175 podStartE2EDuration="3.0145175s" podCreationTimestamp="2025-05-13 10:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:21.014294468 +0000 UTC m=+1.350331714" watchObservedRunningTime="2025-05-13 10:02:21.0145175 +0000 UTC m=+1.350554746" May 13 10:02:21.014781 kubelet[2858]: I0513 10:02:21.014650 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.014643858 podStartE2EDuration="2.014643858s" podCreationTimestamp="2025-05-13 10:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:20.843803075 +0000 UTC m=+1.179840321" watchObservedRunningTime="2025-05-13 10:02:21.014643858 +0000 UTC m=+1.350681104" May 13 10:02:21.031702 kubelet[2858]: I0513 10:02:21.031626 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.031355777 podStartE2EDuration="2.031355777s" podCreationTimestamp="2025-05-13 10:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:21.020230234 +0000 UTC m=+1.356267490" watchObservedRunningTime="2025-05-13 10:02:21.031355777 +0000 UTC m=+1.367393023" May 13 10:02:21.809183 kubelet[2858]: E0513 10:02:21.809133 2858 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:21.809611 kubelet[2858]: E0513 10:02:21.809266 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:22.810313 kubelet[2858]: E0513 10:02:22.810253 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:25.198651 sudo[1805]: pam_unix(sudo:session): session closed for user root May 13 10:02:25.200155 sshd[1804]: Connection closed by 10.0.0.1 port 57840 May 13 10:02:25.200713 sshd-session[1802]: pam_unix(sshd:session): session closed for user core May 13 10:02:25.205049 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:57840.service: Deactivated successfully. May 13 10:02:25.207358 systemd[1]: session-7.scope: Deactivated successfully. May 13 10:02:25.207604 systemd[1]: session-7.scope: Consumed 5.555s CPU time, 238.6M memory peak. May 13 10:02:25.208895 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. May 13 10:02:25.210273 systemd-logind[1575]: Removed session 7. May 13 10:02:25.544095 kubelet[2858]: E0513 10:02:25.543976 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:25.817214 kubelet[2858]: E0513 10:02:25.817077 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:29.411052 kubelet[2858]: E0513 10:02:29.411021 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:29.584716 update_engine[1579]: I20250513 10:02:29.584625 1579 update_attempter.cc:509] Updating boot flags... May 13 10:02:29.823260 kubelet[2858]: E0513 10:02:29.823131 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:30.984997 kubelet[2858]: E0513 10:02:30.984854 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:32.994526 kubelet[2858]: I0513 10:02:32.994480 2858 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 10:02:32.995059 kubelet[2858]: I0513 10:02:32.995045 2858 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 10:02:32.995099 containerd[1591]: time="2025-05-13T10:02:32.994847683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 10:02:33.697466 kubelet[2858]: I0513 10:02:33.697380 2858 topology_manager.go:215] "Topology Admit Handler" podUID="44f50ac2-93e9-4d66-adb5-6a1e284e24f2" podNamespace="kube-system" podName="kube-proxy-mqncv"
May 13 10:02:33.711098 systemd[1]: Created slice kubepods-besteffort-pod44f50ac2_93e9_4d66_adb5_6a1e284e24f2.slice - libcontainer container kubepods-besteffort-pod44f50ac2_93e9_4d66_adb5_6a1e284e24f2.slice.
May 13 10:02:33.736393 kubelet[2858]: I0513 10:02:33.736339 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44f50ac2-93e9-4d66-adb5-6a1e284e24f2-kube-proxy\") pod \"kube-proxy-mqncv\" (UID: \"44f50ac2-93e9-4d66-adb5-6a1e284e24f2\") " pod="kube-system/kube-proxy-mqncv"
May 13 10:02:33.736559 kubelet[2858]: I0513 10:02:33.736407 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f50ac2-93e9-4d66-adb5-6a1e284e24f2-xtables-lock\") pod \"kube-proxy-mqncv\" (UID: \"44f50ac2-93e9-4d66-adb5-6a1e284e24f2\") " pod="kube-system/kube-proxy-mqncv"
May 13 10:02:33.736559 kubelet[2858]: I0513 10:02:33.736427 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f50ac2-93e9-4d66-adb5-6a1e284e24f2-lib-modules\") pod \"kube-proxy-mqncv\" (UID: \"44f50ac2-93e9-4d66-adb5-6a1e284e24f2\") " pod="kube-system/kube-proxy-mqncv"
May 13 10:02:33.736559 kubelet[2858]: I0513 10:02:33.736443 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6csh\" (UniqueName: \"kubernetes.io/projected/44f50ac2-93e9-4d66-adb5-6a1e284e24f2-kube-api-access-w6csh\") pod \"kube-proxy-mqncv\" (UID: \"44f50ac2-93e9-4d66-adb5-6a1e284e24f2\") " pod="kube-system/kube-proxy-mqncv"
May 13 10:02:33.790907 kubelet[2858]: I0513 10:02:33.790811 2858 topology_manager.go:215] "Topology Admit Handler" podUID="79a84a3d-81d0-464f-b439-f2088cc1a03f" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-c2tp9"
May 13 10:02:33.798781 systemd[1]: Created slice kubepods-besteffort-pod79a84a3d_81d0_464f_b439_f2088cc1a03f.slice - libcontainer container kubepods-besteffort-pod79a84a3d_81d0_464f_b439_f2088cc1a03f.slice.
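The four VerifyControllerAttachedVolume lines for kube-proxy-mqncv correspond one-to-one to the volumes of a stock kubeadm-style kube-proxy DaemonSet. A sketch of the matching pod-spec fragment, assuming the standard kubeadm layout (the manifest itself is not part of this log):

    volumes:
    - name: kube-proxy
      configMap:
        name: kube-proxy        # the kubernetes.io/configmap volume above
    - name: xtables-lock
      hostPath:
        path: /run/xtables.lock # host-path lock file serialising iptables writers
        type: FileOrCreate
    - name: lib-modules
      hostPath:
        path: /lib/modules      # kernel modules, mounted read-only
    # kube-api-access-w6csh is the projected service-account token volume that
    # the kubelet injects automatically; it never appears in the manifest.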
May 13 10:02:33.837043 kubelet[2858]: I0513 10:02:33.836928 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/79a84a3d-81d0-464f-b439-f2088cc1a03f-var-lib-calico\") pod \"tigera-operator-797db67f8-c2tp9\" (UID: \"79a84a3d-81d0-464f-b439-f2088cc1a03f\") " pod="tigera-operator/tigera-operator-797db67f8-c2tp9"
May 13 10:02:33.837043 kubelet[2858]: I0513 10:02:33.837010 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mftc\" (UniqueName: \"kubernetes.io/projected/79a84a3d-81d0-464f-b439-f2088cc1a03f-kube-api-access-7mftc\") pod \"tigera-operator-797db67f8-c2tp9\" (UID: \"79a84a3d-81d0-464f-b439-f2088cc1a03f\") " pod="tigera-operator/tigera-operator-797db67f8-c2tp9"
May 13 10:02:34.021861 kubelet[2858]: E0513 10:02:34.021666 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:34.023076 containerd[1591]: time="2025-05-13T10:02:34.022919640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqncv,Uid:44f50ac2-93e9-4d66-adb5-6a1e284e24f2,Namespace:kube-system,Attempt:0,}"
May 13 10:02:34.102861 containerd[1591]: time="2025-05-13T10:02:34.102806362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-c2tp9,Uid:79a84a3d-81d0-464f-b439-f2088cc1a03f,Namespace:tigera-operator,Attempt:0,}"
May 13 10:02:34.109245 containerd[1591]: time="2025-05-13T10:02:34.109195461Z" level=info msg="connecting to shim ba531d7dd26769221d4fce4d9a6ededc3b7c596258d73d733ffffee5544124a5" address="unix:///run/containerd/s/f7a47f804db6e77f3447588406a9f6e6b65ac79db5c84850c18a17b776624c3c" namespace=k8s.io protocol=ttrpc version=3
May 13 10:02:34.161019 systemd[1]: Started cri-containerd-ba531d7dd26769221d4fce4d9a6ededc3b7c596258d73d733ffffee5544124a5.scope - libcontainer container ba531d7dd26769221d4fce4d9a6ededc3b7c596258d73d733ffffee5544124a5.
May 13 10:02:34.234697 containerd[1591]: time="2025-05-13T10:02:34.234637103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqncv,Uid:44f50ac2-93e9-4d66-adb5-6a1e284e24f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba531d7dd26769221d4fce4d9a6ededc3b7c596258d73d733ffffee5544124a5\""
May 13 10:02:34.235404 kubelet[2858]: E0513 10:02:34.235375 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:34.237649 containerd[1591]: time="2025-05-13T10:02:34.237601434Z" level=info msg="CreateContainer within sandbox \"ba531d7dd26769221d4fce4d9a6ededc3b7c596258d73d733ffffee5544124a5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 10:02:34.306515 containerd[1591]: time="2025-05-13T10:02:34.306336219Z" level=info msg="Container 50300b62905dd109492dd4c3b2ffb34a1684b6d551f5b19db6f4c1a8ce7797c6: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:34.319249 containerd[1591]: time="2025-05-13T10:02:34.319192484Z" level=info msg="CreateContainer within sandbox \"ba531d7dd26769221d4fce4d9a6ededc3b7c596258d73d733ffffee5544124a5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50300b62905dd109492dd4c3b2ffb34a1684b6d551f5b19db6f4c1a8ce7797c6\""
May 13 10:02:34.320357 containerd[1591]: time="2025-05-13T10:02:34.320319496Z" level=info msg="StartContainer for \"50300b62905dd109492dd4c3b2ffb34a1684b6d551f5b19db6f4c1a8ce7797c6\""
May 13 10:02:34.321829 containerd[1591]: time="2025-05-13T10:02:34.321788812Z" level=info msg="connecting to shim 50300b62905dd109492dd4c3b2ffb34a1684b6d551f5b19db6f4c1a8ce7797c6" address="unix:///run/containerd/s/f7a47f804db6e77f3447588406a9f6e6b65ac79db5c84850c18a17b776624c3c" protocol=ttrpc version=3
May 13 10:02:34.323635 containerd[1591]: time="2025-05-13T10:02:34.323583561Z" level=info msg="connecting to shim 5a79fcf261e59eb58e1a7b1fb4a0f70a6d099d8a2eb985a94071cdb13de614c3" address="unix:///run/containerd/s/916a455de203dd07c86b6a5c0224ff3c5ab500d6825171b6f4924bec22183c98" namespace=k8s.io protocol=ttrpc version=3
May 13 10:02:34.346026 systemd[1]: Started cri-containerd-50300b62905dd109492dd4c3b2ffb34a1684b6d551f5b19db6f4c1a8ce7797c6.scope - libcontainer container 50300b62905dd109492dd4c3b2ffb34a1684b6d551f5b19db6f4c1a8ce7797c6.
May 13 10:02:34.356701 systemd[1]: Started cri-containerd-5a79fcf261e59eb58e1a7b1fb4a0f70a6d099d8a2eb985a94071cdb13de614c3.scope - libcontainer container 5a79fcf261e59eb58e1a7b1fb4a0f70a6d099d8a2eb985a94071cdb13de614c3.
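Both sandboxes and the kube-proxy container now exist in containerd, addressed over the ttrpc shim sockets shown above. On the node they could be inspected through the CRI, for example (a usage sketch; the IDs are shortened prefixes of the ones in the log):

    crictl pods --name kube-proxy-mqncv   # lists sandbox ba531d7dd267...
    crictl ps --pod ba531d7dd267          # containers in that sandbox, e.g. 50300b62905d...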
May 13 10:02:34.405765 containerd[1591]: time="2025-05-13T10:02:34.405705020Z" level=info msg="StartContainer for \"50300b62905dd109492dd4c3b2ffb34a1684b6d551f5b19db6f4c1a8ce7797c6\" returns successfully"
May 13 10:02:34.413781 containerd[1591]: time="2025-05-13T10:02:34.413722916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-c2tp9,Uid:79a84a3d-81d0-464f-b439-f2088cc1a03f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5a79fcf261e59eb58e1a7b1fb4a0f70a6d099d8a2eb985a94071cdb13de614c3\""
May 13 10:02:34.420143 containerd[1591]: time="2025-05-13T10:02:34.419977773Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 13 10:02:34.834523 kubelet[2858]: E0513 10:02:34.834491 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:34.851231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644998798.mount: Deactivated successfully.
May 13 10:02:36.345230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021256898.mount: Deactivated successfully.
May 13 10:02:36.657656 containerd[1591]: time="2025-05-13T10:02:36.657505647Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:36.658694 containerd[1591]: time="2025-05-13T10:02:36.658653989Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 13 10:02:36.660402 containerd[1591]: time="2025-05-13T10:02:36.660336917Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:36.662567 containerd[1591]: time="2025-05-13T10:02:36.662526979Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:36.663344 containerd[1591]: time="2025-05-13T10:02:36.663304723Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.243290211s"
May 13 10:02:36.663397 containerd[1591]: time="2025-05-13T10:02:36.663344958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 13 10:02:36.665693 containerd[1591]: time="2025-05-13T10:02:36.665298465Z" level=info msg="CreateContainer within sandbox \"5a79fcf261e59eb58e1a7b1fb4a0f70a6d099d8a2eb985a94071cdb13de614c3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 13 10:02:36.672110 containerd[1591]: time="2025-05-13T10:02:36.672067166Z" level=info msg="Container f5ac38fa4667bfa9ade32d2b7dc8488c23890764e717b8691abe77872c8afca8: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:36.678995 containerd[1591]: time="2025-05-13T10:02:36.678949328Z" level=info msg="CreateContainer within sandbox \"5a79fcf261e59eb58e1a7b1fb4a0f70a6d099d8a2eb985a94071cdb13de614c3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f5ac38fa4667bfa9ade32d2b7dc8488c23890764e717b8691abe77872c8afca8\""
May 13 10:02:36.680041 containerd[1591]: time="2025-05-13T10:02:36.679415736Z" level=info msg="StartContainer for \"f5ac38fa4667bfa9ade32d2b7dc8488c23890764e717b8691abe77872c8afca8\""
May 13 10:02:36.680404 containerd[1591]: time="2025-05-13T10:02:36.680376976Z" level=info msg="connecting to shim f5ac38fa4667bfa9ade32d2b7dc8488c23890764e717b8691abe77872c8afca8" address="unix:///run/containerd/s/916a455de203dd07c86b6a5c0224ff3c5ab500d6825171b6f4924bec22183c98" protocol=ttrpc version=3
May 13 10:02:36.702031 systemd[1]: Started cri-containerd-f5ac38fa4667bfa9ade32d2b7dc8488c23890764e717b8691abe77872c8afca8.scope - libcontainer container f5ac38fa4667bfa9ade32d2b7dc8488c23890764e717b8691abe77872c8afca8.
May 13 10:02:36.733498 containerd[1591]: time="2025-05-13T10:02:36.733439178Z" level=info msg="StartContainer for \"f5ac38fa4667bfa9ade32d2b7dc8488c23890764e717b8691abe77872c8afca8\" returns successfully"
May 13 10:02:36.847671 kubelet[2858]: I0513 10:02:36.847580 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mqncv" podStartSLOduration=3.847562069 podStartE2EDuration="3.847562069s" podCreationTimestamp="2025-05-13 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:34.875398421 +0000 UTC m=+15.211435667" watchObservedRunningTime="2025-05-13 10:02:36.847562069 +0000 UTC m=+17.183599315"
May 13 10:02:39.571927 kubelet[2858]: I0513 10:02:39.571802 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-c2tp9" podStartSLOduration=4.32475771 podStartE2EDuration="6.571780678s" podCreationTimestamp="2025-05-13 10:02:33 +0000 UTC" firstStartedPulling="2025-05-13 10:02:34.417088493 +0000 UTC m=+14.753125739" lastFinishedPulling="2025-05-13 10:02:36.664111461 +0000 UTC m=+17.000148707" observedRunningTime="2025-05-13 10:02:36.847862665 +0000 UTC m=+17.183899911" watchObservedRunningTime="2025-05-13 10:02:39.571780678 +0000 UTC m=+19.907817915"
May 13 10:02:39.572528 kubelet[2858]: I0513 10:02:39.572088 2858 topology_manager.go:215] "Topology Admit Handler" podUID="a9aa096b-163b-4a91-9848-52c0b1ba21f7" podNamespace="calico-system" podName="calico-typha-57f7bdc654-z8q4f"
May 13 10:02:39.586664 systemd[1]: Created slice kubepods-besteffort-poda9aa096b_163b_4a91_9848_52c0b1ba21f7.slice - libcontainer container kubepods-besteffort-poda9aa096b_163b_4a91_9848_52c0b1ba21f7.slice.
May 13 10:02:39.633148 kubelet[2858]: I0513 10:02:39.632757 2858 topology_manager.go:215] "Topology Admit Handler" podUID="b12fd9e0-93b3-4849-973a-0d7d1aa81436" podNamespace="calico-system" podName="calico-node-5qpth"
May 13 10:02:39.642231 systemd[1]: Created slice kubepods-besteffort-podb12fd9e0_93b3_4849_973a_0d7d1aa81436.slice - libcontainer container kubepods-besteffort-podb12fd9e0_93b3_4849_973a_0d7d1aa81436.slice.
May 13 10:02:39.672771 kubelet[2858]: I0513 10:02:39.672727 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-flexvol-driver-host\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.672771 kubelet[2858]: I0513 10:02:39.672766 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-xtables-lock\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.672771 kubelet[2858]: I0513 10:02:39.672782 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b12fd9e0-93b3-4849-973a-0d7d1aa81436-tigera-ca-bundle\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.672771 kubelet[2858]: I0513 10:02:39.672798 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-cni-net-dir\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.672771 kubelet[2858]: I0513 10:02:39.672814 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shgdr\" (UniqueName: \"kubernetes.io/projected/b12fd9e0-93b3-4849-973a-0d7d1aa81436-kube-api-access-shgdr\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.673302 kubelet[2858]: I0513 10:02:39.672831 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-cni-bin-dir\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.673302 kubelet[2858]: I0513 10:02:39.672891 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9aa096b-163b-4a91-9848-52c0b1ba21f7-tigera-ca-bundle\") pod \"calico-typha-57f7bdc654-z8q4f\" (UID: \"a9aa096b-163b-4a91-9848-52c0b1ba21f7\") " pod="calico-system/calico-typha-57f7bdc654-z8q4f"
May 13 10:02:39.673302 kubelet[2858]: I0513 10:02:39.672922 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzs6\" (UniqueName: \"kubernetes.io/projected/a9aa096b-163b-4a91-9848-52c0b1ba21f7-kube-api-access-2jzs6\") pod \"calico-typha-57f7bdc654-z8q4f\" (UID: \"a9aa096b-163b-4a91-9848-52c0b1ba21f7\") " pod="calico-system/calico-typha-57f7bdc654-z8q4f"
May 13 10:02:39.673302 kubelet[2858]: I0513 10:02:39.672939 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b12fd9e0-93b3-4849-973a-0d7d1aa81436-node-certs\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.673302 kubelet[2858]: I0513 10:02:39.673031 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-var-lib-calico\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.673420 kubelet[2858]: I0513 10:02:39.673089 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-policysync\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.673420 kubelet[2858]: I0513 10:02:39.673113 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-cni-log-dir\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.673420 kubelet[2858]: I0513 10:02:39.673134 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a9aa096b-163b-4a91-9848-52c0b1ba21f7-typha-certs\") pod \"calico-typha-57f7bdc654-z8q4f\" (UID: \"a9aa096b-163b-4a91-9848-52c0b1ba21f7\") " pod="calico-system/calico-typha-57f7bdc654-z8q4f"
May 13 10:02:39.673420 kubelet[2858]: I0513 10:02:39.673151 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-lib-modules\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.673420 kubelet[2858]: I0513 10:02:39.673167 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b12fd9e0-93b3-4849-973a-0d7d1aa81436-var-run-calico\") pod \"calico-node-5qpth\" (UID: \"b12fd9e0-93b3-4849-973a-0d7d1aa81436\") " pod="calico-system/calico-node-5qpth"
May 13 10:02:39.749443 kubelet[2858]: I0513 10:02:39.749360 2858 topology_manager.go:215] "Topology Admit Handler" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" podNamespace="calico-system" podName="csi-node-driver-thf6f"
May 13 10:02:39.750031 kubelet[2858]: E0513 10:02:39.750011 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5"
May 13 10:02:39.774434 kubelet[2858]: I0513 10:02:39.774356 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ce333ae-3ee7-43d5-a75e-02f0517a7db5-socket-dir\") pod \"csi-node-driver-thf6f\" (UID: \"7ce333ae-3ee7-43d5-a75e-02f0517a7db5\") " pod="calico-system/csi-node-driver-thf6f"
May 13 10:02:39.774434 kubelet[2858]: I0513 10:02:39.774409 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ce333ae-3ee7-43d5-a75e-02f0517a7db5-registration-dir\") pod \"csi-node-driver-thf6f\" (UID: \"7ce333ae-3ee7-43d5-a75e-02f0517a7db5\") " pod="calico-system/csi-node-driver-thf6f"
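The csi-node-driver pod is held back with "cni plugin not initialized" because no CNI config exists on the node yet; containerd said earlier that it is waiting for another component to drop one. With Calico that component is normally the install-cni init container of calico-node, which writes the config into the cni-net-dir host path registered above. Assuming Calico's defaults, the file it drops would be:

    /etc/cni/net.d/10-calico.conflist   # default Calico CNI config; path matches the cni-net-dir volume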
\"kubernetes.io/host-path/7ce333ae-3ee7-43d5-a75e-02f0517a7db5-registration-dir\") pod \"csi-node-driver-thf6f\" (UID: \"7ce333ae-3ee7-43d5-a75e-02f0517a7db5\") " pod="calico-system/csi-node-driver-thf6f" May 13 10:02:39.774666 kubelet[2858]: I0513 10:02:39.774581 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ce333ae-3ee7-43d5-a75e-02f0517a7db5-kubelet-dir\") pod \"csi-node-driver-thf6f\" (UID: \"7ce333ae-3ee7-43d5-a75e-02f0517a7db5\") " pod="calico-system/csi-node-driver-thf6f" May 13 10:02:39.774745 kubelet[2858]: I0513 10:02:39.774717 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bhks\" (UniqueName: \"kubernetes.io/projected/7ce333ae-3ee7-43d5-a75e-02f0517a7db5-kube-api-access-7bhks\") pod \"csi-node-driver-thf6f\" (UID: \"7ce333ae-3ee7-43d5-a75e-02f0517a7db5\") " pod="calico-system/csi-node-driver-thf6f" May 13 10:02:39.774777 kubelet[2858]: I0513 10:02:39.774755 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ce333ae-3ee7-43d5-a75e-02f0517a7db5-varrun\") pod \"csi-node-driver-thf6f\" (UID: \"7ce333ae-3ee7-43d5-a75e-02f0517a7db5\") " pod="calico-system/csi-node-driver-thf6f" May 13 10:02:39.781718 kubelet[2858]: E0513 10:02:39.781663 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.781718 kubelet[2858]: W0513 10:02:39.781699 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.783852 kubelet[2858]: E0513 10:02:39.781734 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.783852 kubelet[2858]: E0513 10:02:39.782315 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.783852 kubelet[2858]: W0513 10:02:39.782325 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.783852 kubelet[2858]: E0513 10:02:39.782505 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.783852 kubelet[2858]: E0513 10:02:39.782661 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.783852 kubelet[2858]: W0513 10:02:39.782671 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.783852 kubelet[2858]: E0513 10:02:39.782715 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.783852 kubelet[2858]: E0513 10:02:39.782977 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.783852 kubelet[2858]: W0513 10:02:39.782987 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.783852 kubelet[2858]: E0513 10:02:39.783073 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.784133 kubelet[2858]: E0513 10:02:39.783230 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.784133 kubelet[2858]: W0513 10:02:39.783238 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.784133 kubelet[2858]: E0513 10:02:39.783960 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.784133 kubelet[2858]: W0513 10:02:39.783969 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.784219 kubelet[2858]: E0513 10:02:39.784143 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.784219 kubelet[2858]: E0513 10:02:39.784159 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.784778 kubelet[2858]: E0513 10:02:39.784752 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.784778 kubelet[2858]: W0513 10:02:39.784770 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.785125 kubelet[2858]: E0513 10:02:39.785037 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.785679 kubelet[2858]: E0513 10:02:39.785332 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.785679 kubelet[2858]: W0513 10:02:39.785347 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.785972 kubelet[2858]: E0513 10:02:39.785824 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.786671 kubelet[2858]: E0513 10:02:39.786646 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.786745 kubelet[2858]: W0513 10:02:39.786728 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.787087 kubelet[2858]: E0513 10:02:39.787010 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.787087 kubelet[2858]: W0513 10:02:39.787021 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.787235 kubelet[2858]: E0513 10:02:39.787221 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.787288 kubelet[2858]: W0513 10:02:39.787277 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.787551 kubelet[2858]: E0513 10:02:39.787471 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.787551 kubelet[2858]: W0513 10:02:39.787481 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.788009 kubelet[2858]: E0513 10:02:39.787968 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.788039 kubelet[2858]: E0513 10:02:39.788014 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.788039 kubelet[2858]: E0513 10:02:39.788032 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.788122 kubelet[2858]: E0513 10:02:39.788092 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.788365 kubelet[2858]: E0513 10:02:39.788295 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.788365 kubelet[2858]: W0513 10:02:39.788310 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.788476 kubelet[2858]: E0513 10:02:39.788459 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.789638 kubelet[2858]: E0513 10:02:39.789606 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.789638 kubelet[2858]: W0513 10:02:39.789620 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.789888 kubelet[2858]: E0513 10:02:39.789856 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.790095 kubelet[2858]: E0513 10:02:39.790082 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.790156 kubelet[2858]: W0513 10:02:39.790145 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.790273 kubelet[2858]: E0513 10:02:39.790261 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.790800 kubelet[2858]: E0513 10:02:39.790784 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.790939 kubelet[2858]: W0513 10:02:39.790923 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.791654 kubelet[2858]: E0513 10:02:39.791217 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.791932 kubelet[2858]: E0513 10:02:39.791916 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.793894 kubelet[2858]: W0513 10:02:39.793035 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.794060 kubelet[2858]: E0513 10:02:39.794045 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.794413 kubelet[2858]: E0513 10:02:39.794400 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.794480 kubelet[2858]: W0513 10:02:39.794469 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.794626 kubelet[2858]: E0513 10:02:39.794567 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.796020 kubelet[2858]: E0513 10:02:39.796005 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.796105 kubelet[2858]: W0513 10:02:39.796090 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.796244 kubelet[2858]: E0513 10:02:39.796211 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.796453 kubelet[2858]: E0513 10:02:39.796362 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.796779 kubelet[2858]: W0513 10:02:39.796763 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.796959 kubelet[2858]: E0513 10:02:39.796919 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.797128 kubelet[2858]: E0513 10:02:39.797109 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.797197 kubelet[2858]: W0513 10:02:39.797185 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.797340 kubelet[2858]: E0513 10:02:39.797326 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.797455 kubelet[2858]: E0513 10:02:39.797444 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.797502 kubelet[2858]: W0513 10:02:39.797492 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.797651 kubelet[2858]: E0513 10:02:39.797622 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.798587 kubelet[2858]: E0513 10:02:39.797863 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.798666 kubelet[2858]: W0513 10:02:39.798650 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.798830 kubelet[2858]: E0513 10:02:39.798817 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.799714 kubelet[2858]: E0513 10:02:39.798928 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.799800 kubelet[2858]: W0513 10:02:39.799787 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.799960 kubelet[2858]: E0513 10:02:39.799947 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.802560 kubelet[2858]: E0513 10:02:39.802530 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.802661 kubelet[2858]: W0513 10:02:39.802644 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.802910 kubelet[2858]: E0513 10:02:39.802850 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.803171 kubelet[2858]: E0513 10:02:39.803066 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.803171 kubelet[2858]: W0513 10:02:39.803099 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.803648 kubelet[2858]: E0513 10:02:39.803507 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.806677 kubelet[2858]: E0513 10:02:39.806642 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.806677 kubelet[2858]: W0513 10:02:39.806668 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.806775 kubelet[2858]: E0513 10:02:39.806750 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.811428 kubelet[2858]: E0513 10:02:39.811408 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.811518 kubelet[2858]: W0513 10:02:39.811505 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.811663 kubelet[2858]: E0513 10:02:39.811648 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.812960 kubelet[2858]: E0513 10:02:39.812947 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.813375 kubelet[2858]: W0513 10:02:39.813358 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.813514 kubelet[2858]: E0513 10:02:39.813502 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.814467 kubelet[2858]: E0513 10:02:39.814453 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.814527 kubelet[2858]: W0513 10:02:39.814516 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.814697 kubelet[2858]: E0513 10:02:39.814651 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.814993 kubelet[2858]: E0513 10:02:39.814982 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.815114 kubelet[2858]: W0513 10:02:39.815050 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.815166 kubelet[2858]: E0513 10:02:39.815155 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.815444 kubelet[2858]: E0513 10:02:39.815421 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.815444 kubelet[2858]: W0513 10:02:39.815431 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.815617 kubelet[2858]: E0513 10:02:39.815593 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.815903 kubelet[2858]: E0513 10:02:39.815862 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.815903 kubelet[2858]: W0513 10:02:39.815889 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.816074 kubelet[2858]: E0513 10:02:39.816049 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.816388 kubelet[2858]: E0513 10:02:39.816362 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.816388 kubelet[2858]: W0513 10:02:39.816374 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.816619 kubelet[2858]: E0513 10:02:39.816559 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.816978 kubelet[2858]: E0513 10:02:39.816865 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.816978 kubelet[2858]: W0513 10:02:39.816893 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.817094 kubelet[2858]: E0513 10:02:39.817080 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.817299 kubelet[2858]: E0513 10:02:39.817226 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.817299 kubelet[2858]: W0513 10:02:39.817236 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.817299 kubelet[2858]: E0513 10:02:39.817244 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.818442 kubelet[2858]: E0513 10:02:39.818399 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.818442 kubelet[2858]: W0513 10:02:39.818410 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.818442 kubelet[2858]: E0513 10:02:39.818419 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.876143 kubelet[2858]: E0513 10:02:39.876012 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.876143 kubelet[2858]: W0513 10:02:39.876046 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.876143 kubelet[2858]: E0513 10:02:39.876070 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.876479 kubelet[2858]: E0513 10:02:39.876436 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.876521 kubelet[2858]: W0513 10:02:39.876477 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.876558 kubelet[2858]: E0513 10:02:39.876514 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.876855 kubelet[2858]: E0513 10:02:39.876827 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.876855 kubelet[2858]: W0513 10:02:39.876848 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.876949 kubelet[2858]: E0513 10:02:39.876898 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.877177 kubelet[2858]: E0513 10:02:39.877145 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.877177 kubelet[2858]: W0513 10:02:39.877162 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.877177 kubelet[2858]: E0513 10:02:39.877186 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.877456 kubelet[2858]: E0513 10:02:39.877396 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.877456 kubelet[2858]: W0513 10:02:39.877447 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.877579 kubelet[2858]: E0513 10:02:39.877470 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.877790 kubelet[2858]: E0513 10:02:39.877774 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.877904 kubelet[2858]: W0513 10:02:39.877853 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.878008 kubelet[2858]: E0513 10:02:39.877986 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.878280 kubelet[2858]: E0513 10:02:39.878246 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.878280 kubelet[2858]: W0513 10:02:39.878264 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.878280 kubelet[2858]: E0513 10:02:39.878284 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.878593 kubelet[2858]: E0513 10:02:39.878565 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.878655 kubelet[2858]: W0513 10:02:39.878597 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.878820 kubelet[2858]: E0513 10:02:39.878799 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.878820 kubelet[2858]: W0513 10:02:39.878814 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.878949 kubelet[2858]: E0513 10:02:39.878930 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.879152 kubelet[2858]: E0513 10:02:39.879115 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.879152 kubelet[2858]: W0513 10:02:39.879127 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.879216 kubelet[2858]: E0513 10:02:39.879161 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.879274 kubelet[2858]: E0513 10:02:39.878805 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.879439 kubelet[2858]: E0513 10:02:39.879402 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.879439 kubelet[2858]: W0513 10:02:39.879423 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.879526 kubelet[2858]: E0513 10:02:39.879446 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.879661 kubelet[2858]: E0513 10:02:39.879645 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.879661 kubelet[2858]: W0513 10:02:39.879657 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.879729 kubelet[2858]: E0513 10:02:39.879673 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.879967 kubelet[2858]: E0513 10:02:39.879936 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.879967 kubelet[2858]: W0513 10:02:39.879951 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.879967 kubelet[2858]: E0513 10:02:39.879973 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.880488 kubelet[2858]: E0513 10:02:39.880446 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.880488 kubelet[2858]: W0513 10:02:39.880465 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.880488 kubelet[2858]: E0513 10:02:39.880478 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.880661 kubelet[2858]: E0513 10:02:39.880644 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.880661 kubelet[2858]: W0513 10:02:39.880658 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.880748 kubelet[2858]: E0513 10:02:39.880665 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.880844 kubelet[2858]: E0513 10:02:39.880826 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.880844 kubelet[2858]: W0513 10:02:39.880839 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.880940 kubelet[2858]: E0513 10:02:39.880847 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.881052 kubelet[2858]: E0513 10:02:39.881031 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.881052 kubelet[2858]: W0513 10:02:39.881046 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.881132 kubelet[2858]: E0513 10:02:39.881075 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.881351 kubelet[2858]: E0513 10:02:39.881320 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.881351 kubelet[2858]: W0513 10:02:39.881341 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.882121 kubelet[2858]: E0513 10:02:39.882004 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.882121 kubelet[2858]: W0513 10:02:39.882018 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.882121 kubelet[2858]: E0513 10:02:39.882028 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.882121 kubelet[2858]: E0513 10:02:39.882037 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.882337 kubelet[2858]: E0513 10:02:39.882316 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.882393 kubelet[2858]: W0513 10:02:39.882382 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.882459 kubelet[2858]: E0513 10:02:39.882447 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.882794 kubelet[2858]: E0513 10:02:39.882773 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.882836 kubelet[2858]: W0513 10:02:39.882792 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.882836 kubelet[2858]: E0513 10:02:39.882814 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.883031 kubelet[2858]: E0513 10:02:39.883015 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.883031 kubelet[2858]: W0513 10:02:39.883026 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.883087 kubelet[2858]: E0513 10:02:39.883041 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.883382 kubelet[2858]: E0513 10:02:39.883359 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.883382 kubelet[2858]: W0513 10:02:39.883374 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.883446 kubelet[2858]: E0513 10:02:39.883390 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.883625 kubelet[2858]: E0513 10:02:39.883608 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.883625 kubelet[2858]: W0513 10:02:39.883621 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.883675 kubelet[2858]: E0513 10:02:39.883636 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.883835 kubelet[2858]: E0513 10:02:39.883820 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.883835 kubelet[2858]: W0513 10:02:39.883831 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.883934 kubelet[2858]: E0513 10:02:39.883839 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:02:39.888228 kubelet[2858]: E0513 10:02:39.888205 2858 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:02:39.888228 kubelet[2858]: W0513 10:02:39.888219 2858 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:02:39.888228 kubelet[2858]: E0513 10:02:39.888228 2858 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:02:39.891418 kubelet[2858]: E0513 10:02:39.891375 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:39.891972 containerd[1591]: time="2025-05-13T10:02:39.891930262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f7bdc654-z8q4f,Uid:a9aa096b-163b-4a91-9848-52c0b1ba21f7,Namespace:calico-system,Attempt:0,}" May 13 10:02:39.945819 kubelet[2858]: E0513 10:02:39.945774 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:39.946399 containerd[1591]: time="2025-05-13T10:02:39.946345107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5qpth,Uid:b12fd9e0-93b3-4849-973a-0d7d1aa81436,Namespace:calico-system,Attempt:0,}" May 13 10:02:40.017918 containerd[1591]: time="2025-05-13T10:02:40.017437365Z" level=info msg="connecting to shim 7f3dac7dc6c3bb1c15e19085fa559eb56e200bd6f533f6018ab8347f42090b3d" address="unix:///run/containerd/s/422891c81edac5e8efe83689e69e1d4419b6ff4d6aa590a6bdb44f8b1e25d733" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:40.025812 containerd[1591]: time="2025-05-13T10:02:40.025755964Z" level=info msg="connecting to shim 00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5" address="unix:///run/containerd/s/e0394ab9daab6f7828cc63a63c08e8f5757a3b44a3cdaa752877004bdc9f781b" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:40.050230 systemd[1]: Started cri-containerd-7f3dac7dc6c3bb1c15e19085fa559eb56e200bd6f533f6018ab8347f42090b3d.scope - libcontainer container 7f3dac7dc6c3bb1c15e19085fa559eb56e200bd6f533f6018ab8347f42090b3d. May 13 10:02:40.055767 systemd[1]: Started cri-containerd-00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5.scope - libcontainer container 00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5. 
May 13 10:02:40.191212 containerd[1591]: time="2025-05-13T10:02:40.191147459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5qpth,Uid:b12fd9e0-93b3-4849-973a-0d7d1aa81436,Namespace:calico-system,Attempt:0,} returns sandbox id \"00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5\"" May 13 10:02:40.192076 kubelet[2858]: E0513 10:02:40.192042 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:40.193077 containerd[1591]: time="2025-05-13T10:02:40.193021014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 10:02:40.205487 containerd[1591]: time="2025-05-13T10:02:40.205437491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f7bdc654-z8q4f,Uid:a9aa096b-163b-4a91-9848-52c0b1ba21f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f3dac7dc6c3bb1c15e19085fa559eb56e200bd6f533f6018ab8347f42090b3d\"" May 13 10:02:40.206432 kubelet[2858]: E0513 10:02:40.206399 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:41.776548 kubelet[2858]: E0513 10:02:41.776438 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:42.226067 containerd[1591]: time="2025-05-13T10:02:42.225995365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:42.226850 containerd[1591]: time="2025-05-13T10:02:42.226797223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 10:02:42.228259 containerd[1591]: time="2025-05-13T10:02:42.228195823Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:42.231672 containerd[1591]: time="2025-05-13T10:02:42.231628800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:42.232219 containerd[1591]: time="2025-05-13T10:02:42.232184114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.039120169s" May 13 10:02:42.232219 containerd[1591]: time="2025-05-13T10:02:42.232216144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 10:02:42.233278 containerd[1591]: time="2025-05-13T10:02:42.233141515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 
10:02:42.234447 containerd[1591]: time="2025-05-13T10:02:42.234404520Z" level=info msg="CreateContainer within sandbox \"00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 10:02:42.244786 containerd[1591]: time="2025-05-13T10:02:42.244727355Z" level=info msg="Container 3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:42.253486 containerd[1591]: time="2025-05-13T10:02:42.253444359Z" level=info msg="CreateContainer within sandbox \"00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3\"" May 13 10:02:42.254144 containerd[1591]: time="2025-05-13T10:02:42.254015804Z" level=info msg="StartContainer for \"3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3\"" May 13 10:02:42.255664 containerd[1591]: time="2025-05-13T10:02:42.255631012Z" level=info msg="connecting to shim 3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3" address="unix:///run/containerd/s/e0394ab9daab6f7828cc63a63c08e8f5757a3b44a3cdaa752877004bdc9f781b" protocol=ttrpc version=3 May 13 10:02:42.280262 systemd[1]: Started cri-containerd-3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3.scope - libcontainer container 3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3. May 13 10:02:42.340758 systemd[1]: cri-containerd-3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3.scope: Deactivated successfully. May 13 10:02:42.341195 systemd[1]: cri-containerd-3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3.scope: Consumed 42ms CPU time, 8.3M memory peak, 6.3M written to disk. May 13 10:02:42.342391 containerd[1591]: time="2025-05-13T10:02:42.342342412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3\" id:\"3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3\" pid:3451 exited_at:{seconds:1747130562 nanos:341837652}" May 13 10:02:42.474062 containerd[1591]: time="2025-05-13T10:02:42.473972281Z" level=info msg="received exit event container_id:\"3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3\" id:\"3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3\" pid:3451 exited_at:{seconds:1747130562 nanos:341837652}" May 13 10:02:42.475929 containerd[1591]: time="2025-05-13T10:02:42.475895367Z" level=info msg="StartContainer for \"3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3\" returns successfully" May 13 10:02:42.497823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3077587f95182029a0dba3f35e5489b5baec015fa5292eb77f15d8e72fba85c3-rootfs.mount: Deactivated successfully. 
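
The flexvol-driver container above is expected to exit almost immediately (it only installs the FlexVolume binary into place), which is why systemd deactivates its scope after just 42ms of CPU time and the TaskExit event arrives before StartContainer even returns. The exit time is carried as raw protobuf seconds/nanos; a quick conversion (sketch) confirms it lines up with the surrounding journal timestamps:

```go
// exited_at in the TaskExit event above is a protobuf Timestamp
// (seconds:1747130562 nanos:341837652); converting it shows the task
// exited at 10:02:42.34 UTC, matching the journal lines around it.
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Unix(1747130562, 341837652).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-05-13T10:02:42.341837652Z
}
```
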
May 13 10:02:42.852339 kubelet[2858]: E0513 10:02:42.852017 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:43.776066 kubelet[2858]: E0513 10:02:43.775998 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:45.425024 containerd[1591]: time="2025-05-13T10:02:45.424929363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:45.442937 containerd[1591]: time="2025-05-13T10:02:45.442761891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 10:02:45.451582 containerd[1591]: time="2025-05-13T10:02:45.451536730Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:45.460084 containerd[1591]: time="2025-05-13T10:02:45.460029166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:45.460735 containerd[1591]: time="2025-05-13T10:02:45.460674069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.227503068s" May 13 10:02:45.460735 containerd[1591]: time="2025-05-13T10:02:45.460724564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 10:02:45.461892 containerd[1591]: time="2025-05-13T10:02:45.461841775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 10:02:45.471283 containerd[1591]: time="2025-05-13T10:02:45.471154183Z" level=info msg="CreateContainer within sandbox \"7f3dac7dc6c3bb1c15e19085fa559eb56e200bd6f533f6018ab8347f42090b3d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 10:02:45.528075 containerd[1591]: time="2025-05-13T10:02:45.528028568Z" level=info msg="Container 78c8d1fb2414eaf1cfb7107aa0ccc95d325a6461dc7bd3ca249ec4cea37a06ac: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:45.678820 containerd[1591]: time="2025-05-13T10:02:45.678775029Z" level=info msg="CreateContainer within sandbox \"7f3dac7dc6c3bb1c15e19085fa559eb56e200bd6f533f6018ab8347f42090b3d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"78c8d1fb2414eaf1cfb7107aa0ccc95d325a6461dc7bd3ca249ec4cea37a06ac\"" May 13 10:02:45.679451 containerd[1591]: time="2025-05-13T10:02:45.679380257Z" level=info msg="StartContainer for \"78c8d1fb2414eaf1cfb7107aa0ccc95d325a6461dc7bd3ca249ec4cea37a06ac\"" May 13 10:02:45.680619 containerd[1591]: time="2025-05-13T10:02:45.680574041Z" level=info msg="connecting to shim 
78c8d1fb2414eaf1cfb7107aa0ccc95d325a6461dc7bd3ca249ec4cea37a06ac" address="unix:///run/containerd/s/422891c81edac5e8efe83689e69e1d4419b6ff4d6aa590a6bdb44f8b1e25d733" protocol=ttrpc version=3 May 13 10:02:45.709023 systemd[1]: Started cri-containerd-78c8d1fb2414eaf1cfb7107aa0ccc95d325a6461dc7bd3ca249ec4cea37a06ac.scope - libcontainer container 78c8d1fb2414eaf1cfb7107aa0ccc95d325a6461dc7bd3ca249ec4cea37a06ac. May 13 10:02:45.775861 kubelet[2858]: E0513 10:02:45.775774 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:45.855588 containerd[1591]: time="2025-05-13T10:02:45.855547142Z" level=info msg="StartContainer for \"78c8d1fb2414eaf1cfb7107aa0ccc95d325a6461dc7bd3ca249ec4cea37a06ac\" returns successfully" May 13 10:02:45.858161 kubelet[2858]: E0513 10:02:45.858138 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:45.919378 kubelet[2858]: I0513 10:02:45.919328 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57f7bdc654-z8q4f" podStartSLOduration=1.664638845 podStartE2EDuration="6.919313465s" podCreationTimestamp="2025-05-13 10:02:39 +0000 UTC" firstStartedPulling="2025-05-13 10:02:40.20700521 +0000 UTC m=+20.543042456" lastFinishedPulling="2025-05-13 10:02:45.46167983 +0000 UTC m=+25.797717076" observedRunningTime="2025-05-13 10:02:45.919051032 +0000 UTC m=+26.255088268" watchObservedRunningTime="2025-05-13 10:02:45.919313465 +0000 UTC m=+26.255350711" May 13 10:02:46.859255 kubelet[2858]: I0513 10:02:46.859219 2858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:02:46.859797 kubelet[2858]: E0513 10:02:46.859773 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:47.775478 kubelet[2858]: E0513 10:02:47.775409 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:49.775963 kubelet[2858]: E0513 10:02:49.775572 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:50.858183 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:49414.service - OpenSSH per-connection server daemon (10.0.0.1:49414). May 13 10:02:51.296175 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 49414 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:02:51.298053 sshd-session[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:51.304006 systemd-logind[1575]: New session 8 of user core. 
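
The pod_startup_latency_tracker line above encodes a small calculation: podStartE2EDuration lines up with watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window between firstStartedPulling and lastFinishedPulling. A re-derivation from the logged timestamps (a sketch, not kubelet's actual code):

```go
// Reconstructing the calico-typha startup numbers logged above from the
// timestamps in the same log line (hypothetical re-derivation): the SLO
// duration is the end-to-end duration with image-pull time excluded.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-13T10:02:39Z")            // podCreationTimestamp
	running := mustParse("2025-05-13T10:02:45.919313465Z")  // watchObservedRunningTime
	pullStart := mustParse("2025-05-13T10:02:40.207005210Z") // firstStartedPulling
	pullEnd := mustParse("2025-05-13T10:02:45.461679830Z")   // lastFinishedPulling

	e2e := running.Sub(created)         // 6.919313465s == podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 1.664638845s == podStartSLOduration
	fmt.Println(e2e, slo)
}
```
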
May 13 10:02:51.309023 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 10:02:51.457069 sshd[3539]: Connection closed by 10.0.0.1 port 49414 May 13 10:02:51.457222 sshd-session[3533]: pam_unix(sshd:session): session closed for user core May 13 10:02:51.461676 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:49414.service: Deactivated successfully. May 13 10:02:51.464136 systemd[1]: session-8.scope: Deactivated successfully. May 13 10:02:51.466440 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. May 13 10:02:51.468094 systemd-logind[1575]: Removed session 8. May 13 10:02:51.778135 kubelet[2858]: E0513 10:02:51.778028 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:52.543134 containerd[1591]: time="2025-05-13T10:02:52.543067616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:52.544060 containerd[1591]: time="2025-05-13T10:02:52.544031808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 10:02:52.545429 containerd[1591]: time="2025-05-13T10:02:52.545376896Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:52.547243 containerd[1591]: time="2025-05-13T10:02:52.547206764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:52.547970 containerd[1591]: time="2025-05-13T10:02:52.547932389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 7.086057521s" May 13 10:02:52.548025 containerd[1591]: time="2025-05-13T10:02:52.547969608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 10:02:52.550053 containerd[1591]: time="2025-05-13T10:02:52.550017857Z" level=info msg="CreateContainer within sandbox \"00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 10:02:52.558777 containerd[1591]: time="2025-05-13T10:02:52.558734829Z" level=info msg="Container cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:52.570804 containerd[1591]: time="2025-05-13T10:02:52.570752473Z" level=info msg="CreateContainer within sandbox \"00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe\"" May 13 10:02:52.571285 containerd[1591]: time="2025-05-13T10:02:52.571247854Z" level=info msg="StartContainer for 
\"cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe\"" May 13 10:02:52.573265 containerd[1591]: time="2025-05-13T10:02:52.573233024Z" level=info msg="connecting to shim cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe" address="unix:///run/containerd/s/e0394ab9daab6f7828cc63a63c08e8f5757a3b44a3cdaa752877004bdc9f781b" protocol=ttrpc version=3 May 13 10:02:52.595013 systemd[1]: Started cri-containerd-cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe.scope - libcontainer container cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe. May 13 10:02:52.640633 containerd[1591]: time="2025-05-13T10:02:52.640471833Z" level=info msg="StartContainer for \"cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe\" returns successfully" May 13 10:02:52.872897 kubelet[2858]: E0513 10:02:52.872734 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:53.776135 kubelet[2858]: E0513 10:02:53.776059 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:53.852980 systemd[1]: cri-containerd-cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe.scope: Deactivated successfully. May 13 10:02:53.853427 systemd[1]: cri-containerd-cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe.scope: Consumed 561ms CPU time, 161.9M memory peak, 36K read from disk, 154M written to disk. May 13 10:02:53.855042 containerd[1591]: time="2025-05-13T10:02:53.854984161Z" level=info msg="received exit event container_id:\"cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe\" id:\"cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe\" pid:3573 exited_at:{seconds:1747130573 nanos:854656837}" May 13 10:02:53.855500 containerd[1591]: time="2025-05-13T10:02:53.855039436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe\" id:\"cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe\" pid:3573 exited_at:{seconds:1747130573 nanos:854656837}" May 13 10:02:53.876390 kubelet[2858]: E0513 10:02:53.874855 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:53.876370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd2fa90554deb894688e68b22cb4a81c90a6c3e04b31f561fd1b7b443140fbfe-rootfs.mount: Deactivated successfully. 
May 13 10:02:53.932401 kubelet[2858]: I0513 10:02:53.932352 2858 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 10:02:54.100524 kubelet[2858]: I0513 10:02:54.100373 2858 topology_manager.go:215] "Topology Admit Handler" podUID="8baff794-36a7-48d3-96db-8c84e5de9a94" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9wnm7" May 13 10:02:54.100946 kubelet[2858]: I0513 10:02:54.100819 2858 topology_manager.go:215] "Topology Admit Handler" podUID="8d5b8788-d690-4a64-a79f-7dcf9f48991f" podNamespace="calico-apiserver" podName="calico-apiserver-7597b9c9d9-8lxcx" May 13 10:02:54.101375 kubelet[2858]: I0513 10:02:54.101044 2858 topology_manager.go:215] "Topology Admit Handler" podUID="8cfe200d-9f73-4fa4-9cc4-8c6ccb628016" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k2tdz" May 13 10:02:54.101375 kubelet[2858]: I0513 10:02:54.101227 2858 topology_manager.go:215] "Topology Admit Handler" podUID="115eb9a3-80d8-4fef-9662-417de41e8958" podNamespace="calico-apiserver" podName="calico-apiserver-7597b9c9d9-dv8bp" May 13 10:02:54.101433 kubelet[2858]: I0513 10:02:54.101425 2858 topology_manager.go:215] "Topology Admit Handler" podUID="499fb77c-0fee-4700-88ee-d3e454a651ef" podNamespace="calico-system" podName="calico-kube-controllers-667f6586cc-nnvqm" May 13 10:02:54.111756 systemd[1]: Created slice kubepods-burstable-pod8cfe200d_9f73_4fa4_9cc4_8c6ccb628016.slice - libcontainer container kubepods-burstable-pod8cfe200d_9f73_4fa4_9cc4_8c6ccb628016.slice. May 13 10:02:54.118583 systemd[1]: Created slice kubepods-besteffort-pod8d5b8788_d690_4a64_a79f_7dcf9f48991f.slice - libcontainer container kubepods-besteffort-pod8d5b8788_d690_4a64_a79f_7dcf9f48991f.slice. May 13 10:02:54.126109 systemd[1]: Created slice kubepods-besteffort-pod499fb77c_0fee_4700_88ee_d3e454a651ef.slice - libcontainer container kubepods-besteffort-pod499fb77c_0fee_4700_88ee_d3e454a651ef.slice. May 13 10:02:54.131842 systemd[1]: Created slice kubepods-burstable-pod8baff794_36a7_48d3_96db_8c84e5de9a94.slice - libcontainer container kubepods-burstable-pod8baff794_36a7_48d3_96db_8c84e5de9a94.slice. May 13 10:02:54.137962 systemd[1]: Created slice kubepods-besteffort-pod115eb9a3_80d8_4fef_9662_417de41e8958.slice - libcontainer container kubepods-besteffort-pod115eb9a3_80d8_4fef_9662_417de41e8958.slice. 
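
The slice names systemd creates above are derived mechanically from each pod's QoS class and UID: the kubelet's systemd cgroup driver replaces every "-" in the UID with "_", because systemd reserves "-" to express slice hierarchy. A sketch of the convention (podSlice is an illustrative helper, not kubelet's actual function):

```go
// Reproducing the slice names from the log above: QoS class prefix plus the
// pod UID with dashes mapped to underscores, since "-" is systemd's slice
// hierarchy separator.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// kubepods-burstable-pod8cfe200d_9f73_4fa4_9cc4_8c6ccb628016.slice
	fmt.Println(podSlice("burstable", "8cfe200d-9f73-4fa4-9cc4-8c6ccb628016"))
	// kubepods-besteffort-pod115eb9a3_80d8_4fef_9662_417de41e8958.slice
	fmt.Println(podSlice("besteffort", "115eb9a3-80d8-4fef-9662-417de41e8958"))
}
```
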
May 13 10:02:54.274327 kubelet[2858]: I0513 10:02:54.274267 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4z6\" (UniqueName: \"kubernetes.io/projected/8cfe200d-9f73-4fa4-9cc4-8c6ccb628016-kube-api-access-6h4z6\") pod \"coredns-7db6d8ff4d-k2tdz\" (UID: \"8cfe200d-9f73-4fa4-9cc4-8c6ccb628016\") " pod="kube-system/coredns-7db6d8ff4d-k2tdz" May 13 10:02:54.274327 kubelet[2858]: I0513 10:02:54.274316 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvn7\" (UniqueName: \"kubernetes.io/projected/8d5b8788-d690-4a64-a79f-7dcf9f48991f-kube-api-access-tzvn7\") pod \"calico-apiserver-7597b9c9d9-8lxcx\" (UID: \"8d5b8788-d690-4a64-a79f-7dcf9f48991f\") " pod="calico-apiserver/calico-apiserver-7597b9c9d9-8lxcx" May 13 10:02:54.274327 kubelet[2858]: I0513 10:02:54.274337 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvh8z\" (UniqueName: \"kubernetes.io/projected/8baff794-36a7-48d3-96db-8c84e5de9a94-kube-api-access-kvh8z\") pod \"coredns-7db6d8ff4d-9wnm7\" (UID: \"8baff794-36a7-48d3-96db-8c84e5de9a94\") " pod="kube-system/coredns-7db6d8ff4d-9wnm7" May 13 10:02:54.274556 kubelet[2858]: I0513 10:02:54.274353 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cfe200d-9f73-4fa4-9cc4-8c6ccb628016-config-volume\") pod \"coredns-7db6d8ff4d-k2tdz\" (UID: \"8cfe200d-9f73-4fa4-9cc4-8c6ccb628016\") " pod="kube-system/coredns-7db6d8ff4d-k2tdz" May 13 10:02:54.274556 kubelet[2858]: I0513 10:02:54.274370 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d5b8788-d690-4a64-a79f-7dcf9f48991f-calico-apiserver-certs\") pod \"calico-apiserver-7597b9c9d9-8lxcx\" (UID: \"8d5b8788-d690-4a64-a79f-7dcf9f48991f\") " pod="calico-apiserver/calico-apiserver-7597b9c9d9-8lxcx" May 13 10:02:54.274556 kubelet[2858]: I0513 10:02:54.274388 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhzbf\" (UniqueName: \"kubernetes.io/projected/115eb9a3-80d8-4fef-9662-417de41e8958-kube-api-access-qhzbf\") pod \"calico-apiserver-7597b9c9d9-dv8bp\" (UID: \"115eb9a3-80d8-4fef-9662-417de41e8958\") " pod="calico-apiserver/calico-apiserver-7597b9c9d9-dv8bp" May 13 10:02:54.274556 kubelet[2858]: I0513 10:02:54.274408 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j89p\" (UniqueName: \"kubernetes.io/projected/499fb77c-0fee-4700-88ee-d3e454a651ef-kube-api-access-6j89p\") pod \"calico-kube-controllers-667f6586cc-nnvqm\" (UID: \"499fb77c-0fee-4700-88ee-d3e454a651ef\") " pod="calico-system/calico-kube-controllers-667f6586cc-nnvqm" May 13 10:02:54.274556 kubelet[2858]: I0513 10:02:54.274435 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/115eb9a3-80d8-4fef-9662-417de41e8958-calico-apiserver-certs\") pod \"calico-apiserver-7597b9c9d9-dv8bp\" (UID: \"115eb9a3-80d8-4fef-9662-417de41e8958\") " pod="calico-apiserver/calico-apiserver-7597b9c9d9-dv8bp" May 13 10:02:54.274686 kubelet[2858]: I0513 10:02:54.274458 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8baff794-36a7-48d3-96db-8c84e5de9a94-config-volume\") pod \"coredns-7db6d8ff4d-9wnm7\" (UID: \"8baff794-36a7-48d3-96db-8c84e5de9a94\") " pod="kube-system/coredns-7db6d8ff4d-9wnm7" May 13 10:02:54.274686 kubelet[2858]: I0513 10:02:54.274476 2858 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/499fb77c-0fee-4700-88ee-d3e454a651ef-tigera-ca-bundle\") pod \"calico-kube-controllers-667f6586cc-nnvqm\" (UID: \"499fb77c-0fee-4700-88ee-d3e454a651ef\") " pod="calico-system/calico-kube-controllers-667f6586cc-nnvqm" May 13 10:02:54.415725 kubelet[2858]: E0513 10:02:54.415613 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:54.416851 containerd[1591]: time="2025-05-13T10:02:54.416792339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k2tdz,Uid:8cfe200d-9f73-4fa4-9cc4-8c6ccb628016,Namespace:kube-system,Attempt:0,}" May 13 10:02:54.422773 containerd[1591]: time="2025-05-13T10:02:54.422735287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-8lxcx,Uid:8d5b8788-d690-4a64-a79f-7dcf9f48991f,Namespace:calico-apiserver,Attempt:0,}" May 13 10:02:54.428980 containerd[1591]: time="2025-05-13T10:02:54.428940888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667f6586cc-nnvqm,Uid:499fb77c-0fee-4700-88ee-d3e454a651ef,Namespace:calico-system,Attempt:0,}" May 13 10:02:54.435354 kubelet[2858]: E0513 10:02:54.435314 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:54.435758 containerd[1591]: time="2025-05-13T10:02:54.435734173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9wnm7,Uid:8baff794-36a7-48d3-96db-8c84e5de9a94,Namespace:kube-system,Attempt:0,}" May 13 10:02:54.441039 containerd[1591]: time="2025-05-13T10:02:54.441003174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-dv8bp,Uid:115eb9a3-80d8-4fef-9662-417de41e8958,Namespace:calico-apiserver,Attempt:0,}" May 13 10:02:54.512258 containerd[1591]: time="2025-05-13T10:02:54.512179067Z" level=error msg="Failed to destroy network for sandbox \"729a8e002a03515b0d73219fe54d38c46e1b382200ec8da598563138715b9594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.515040 containerd[1591]: time="2025-05-13T10:02:54.515015407Z" level=error msg="Failed to destroy network for sandbox \"43bb460df9fabe92417ce3d3486cbc761f6b8eebd75efd0fe1be34e50c29dac2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.518107 containerd[1591]: time="2025-05-13T10:02:54.518075427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667f6586cc-nnvqm,Uid:499fb77c-0fee-4700-88ee-d3e454a651ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"729a8e002a03515b0d73219fe54d38c46e1b382200ec8da598563138715b9594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.518658 kubelet[2858]: E0513 10:02:54.518432 2858 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729a8e002a03515b0d73219fe54d38c46e1b382200ec8da598563138715b9594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.518658 kubelet[2858]: E0513 10:02:54.518506 2858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729a8e002a03515b0d73219fe54d38c46e1b382200ec8da598563138715b9594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-667f6586cc-nnvqm" May 13 10:02:54.518658 kubelet[2858]: E0513 10:02:54.518528 2858 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729a8e002a03515b0d73219fe54d38c46e1b382200ec8da598563138715b9594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-667f6586cc-nnvqm" May 13 10:02:54.518853 kubelet[2858]: E0513 10:02:54.518630 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-667f6586cc-nnvqm_calico-system(499fb77c-0fee-4700-88ee-d3e454a651ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-667f6586cc-nnvqm_calico-system(499fb77c-0fee-4700-88ee-d3e454a651ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"729a8e002a03515b0d73219fe54d38c46e1b382200ec8da598563138715b9594\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-667f6586cc-nnvqm" podUID="499fb77c-0fee-4700-88ee-d3e454a651ef" May 13 10:02:54.519978 containerd[1591]: time="2025-05-13T10:02:54.519765223Z" level=error msg="Failed to destroy network for sandbox \"d94a34b16b792fc2462d7daffa96b57eae4a2c1c5fdbab1303392c60848b55fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.520408 containerd[1591]: time="2025-05-13T10:02:54.520215658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k2tdz,Uid:8cfe200d-9f73-4fa4-9cc4-8c6ccb628016,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43bb460df9fabe92417ce3d3486cbc761f6b8eebd75efd0fe1be34e50c29dac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 
10:02:54.520688 kubelet[2858]: E0513 10:02:54.520547 2858 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43bb460df9fabe92417ce3d3486cbc761f6b8eebd75efd0fe1be34e50c29dac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.520688 kubelet[2858]: E0513 10:02:54.520602 2858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43bb460df9fabe92417ce3d3486cbc761f6b8eebd75efd0fe1be34e50c29dac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k2tdz" May 13 10:02:54.520688 kubelet[2858]: E0513 10:02:54.520623 2858 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43bb460df9fabe92417ce3d3486cbc761f6b8eebd75efd0fe1be34e50c29dac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k2tdz" May 13 10:02:54.520941 kubelet[2858]: E0513 10:02:54.520659 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-k2tdz_kube-system(8cfe200d-9f73-4fa4-9cc4-8c6ccb628016)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-k2tdz_kube-system(8cfe200d-9f73-4fa4-9cc4-8c6ccb628016)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43bb460df9fabe92417ce3d3486cbc761f6b8eebd75efd0fe1be34e50c29dac2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k2tdz" podUID="8cfe200d-9f73-4fa4-9cc4-8c6ccb628016" May 13 10:02:54.520999 containerd[1591]: time="2025-05-13T10:02:54.520860451Z" level=error msg="Failed to destroy network for sandbox \"0bbfbdb6d4209d3e0eff86182b4154fb045dc37733420c09bb0c59e2f359cb91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.521688 containerd[1591]: time="2025-05-13T10:02:54.521649793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-8lxcx,Uid:8d5b8788-d690-4a64-a79f-7dcf9f48991f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d94a34b16b792fc2462d7daffa96b57eae4a2c1c5fdbab1303392c60848b55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.521855 kubelet[2858]: E0513 10:02:54.521786 2858 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d94a34b16b792fc2462d7daffa96b57eae4a2c1c5fdbab1303392c60848b55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.522148 kubelet[2858]: E0513 10:02:54.521865 2858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d94a34b16b792fc2462d7daffa96b57eae4a2c1c5fdbab1303392c60848b55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7597b9c9d9-8lxcx" May 13 10:02:54.522148 kubelet[2858]: E0513 10:02:54.521926 2858 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d94a34b16b792fc2462d7daffa96b57eae4a2c1c5fdbab1303392c60848b55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7597b9c9d9-8lxcx" May 13 10:02:54.522148 kubelet[2858]: E0513 10:02:54.521958 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7597b9c9d9-8lxcx_calico-apiserver(8d5b8788-d690-4a64-a79f-7dcf9f48991f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7597b9c9d9-8lxcx_calico-apiserver(8d5b8788-d690-4a64-a79f-7dcf9f48991f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d94a34b16b792fc2462d7daffa96b57eae4a2c1c5fdbab1303392c60848b55fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7597b9c9d9-8lxcx" podUID="8d5b8788-d690-4a64-a79f-7dcf9f48991f" May 13 10:02:54.522258 containerd[1591]: time="2025-05-13T10:02:54.521992768Z" level=error msg="Failed to destroy network for sandbox \"14f1c1fadbb0f78cf77f9759150cfb52f662b76dbc11f3e8041fd6a223911ee1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.523012 containerd[1591]: time="2025-05-13T10:02:54.522948865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-dv8bp,Uid:115eb9a3-80d8-4fef-9662-417de41e8958,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbfbdb6d4209d3e0eff86182b4154fb045dc37733420c09bb0c59e2f359cb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.523219 kubelet[2858]: E0513 10:02:54.523124 2858 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbfbdb6d4209d3e0eff86182b4154fb045dc37733420c09bb0c59e2f359cb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.523219 kubelet[2858]: E0513 10:02:54.523162 2858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"0bbfbdb6d4209d3e0eff86182b4154fb045dc37733420c09bb0c59e2f359cb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7597b9c9d9-dv8bp" May 13 10:02:54.523219 kubelet[2858]: E0513 10:02:54.523178 2858 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbfbdb6d4209d3e0eff86182b4154fb045dc37733420c09bb0c59e2f359cb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7597b9c9d9-dv8bp" May 13 10:02:54.523326 kubelet[2858]: E0513 10:02:54.523230 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7597b9c9d9-dv8bp_calico-apiserver(115eb9a3-80d8-4fef-9662-417de41e8958)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7597b9c9d9-dv8bp_calico-apiserver(115eb9a3-80d8-4fef-9662-417de41e8958)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bbfbdb6d4209d3e0eff86182b4154fb045dc37733420c09bb0c59e2f359cb91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7597b9c9d9-dv8bp" podUID="115eb9a3-80d8-4fef-9662-417de41e8958" May 13 10:02:54.524116 containerd[1591]: time="2025-05-13T10:02:54.524081232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9wnm7,Uid:8baff794-36a7-48d3-96db-8c84e5de9a94,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f1c1fadbb0f78cf77f9759150cfb52f662b76dbc11f3e8041fd6a223911ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.524457 kubelet[2858]: E0513 10:02:54.524417 2858 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f1c1fadbb0f78cf77f9759150cfb52f662b76dbc11f3e8041fd6a223911ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:54.524598 kubelet[2858]: E0513 10:02:54.524482 2858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f1c1fadbb0f78cf77f9759150cfb52f662b76dbc11f3e8041fd6a223911ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9wnm7" May 13 10:02:54.524598 kubelet[2858]: E0513 10:02:54.524499 2858 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f1c1fadbb0f78cf77f9759150cfb52f662b76dbc11f3e8041fd6a223911ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9wnm7" May 13 10:02:54.524598 kubelet[2858]: E0513 10:02:54.524557 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9wnm7_kube-system(8baff794-36a7-48d3-96db-8c84e5de9a94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9wnm7_kube-system(8baff794-36a7-48d3-96db-8c84e5de9a94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14f1c1fadbb0f78cf77f9759150cfb52f662b76dbc11f3e8041fd6a223911ee1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9wnm7" podUID="8baff794-36a7-48d3-96db-8c84e5de9a94" May 13 10:02:54.880291 kubelet[2858]: E0513 10:02:54.880249 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:54.881226 containerd[1591]: time="2025-05-13T10:02:54.881164265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 10:02:55.782262 systemd[1]: Created slice kubepods-besteffort-pod7ce333ae_3ee7_43d5_a75e_02f0517a7db5.slice - libcontainer container kubepods-besteffort-pod7ce333ae_3ee7_43d5_a75e_02f0517a7db5.slice. May 13 10:02:55.785419 containerd[1591]: time="2025-05-13T10:02:55.785374926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-thf6f,Uid:7ce333ae-3ee7-43d5-a75e-02f0517a7db5,Namespace:calico-system,Attempt:0,}" May 13 10:02:55.894756 containerd[1591]: time="2025-05-13T10:02:55.894520748Z" level=error msg="Failed to destroy network for sandbox \"7543da38f923e518217116bccd1f083eef28271f6eaaf2bef5ca0aa05b279789\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:55.896635 containerd[1591]: time="2025-05-13T10:02:55.896573635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-thf6f,Uid:7ce333ae-3ee7-43d5-a75e-02f0517a7db5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7543da38f923e518217116bccd1f083eef28271f6eaaf2bef5ca0aa05b279789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:55.897010 kubelet[2858]: E0513 10:02:55.896936 2858 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7543da38f923e518217116bccd1f083eef28271f6eaaf2bef5ca0aa05b279789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:02:55.897343 kubelet[2858]: E0513 10:02:55.897035 2858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7543da38f923e518217116bccd1f083eef28271f6eaaf2bef5ca0aa05b279789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-thf6f" May 13 10:02:55.897343 kubelet[2858]: E0513 10:02:55.897070 2858 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7543da38f923e518217116bccd1f083eef28271f6eaaf2bef5ca0aa05b279789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-thf6f" May 13 10:02:55.897343 kubelet[2858]: E0513 10:02:55.897136 2858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-thf6f_calico-system(7ce333ae-3ee7-43d5-a75e-02f0517a7db5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-thf6f_calico-system(7ce333ae-3ee7-43d5-a75e-02f0517a7db5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7543da38f923e518217116bccd1f083eef28271f6eaaf2bef5ca0aa05b279789\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-thf6f" podUID="7ce333ae-3ee7-43d5-a75e-02f0517a7db5" May 13 10:02:55.897280 systemd[1]: run-netns-cni\x2d18e23264\x2d5d5a\x2d1c03\x2dd933\x2dc8333bda640a.mount: Deactivated successfully. May 13 10:02:56.477555 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:45796.service - OpenSSH per-connection server daemon (10.0.0.1:45796). May 13 10:02:56.534398 sshd[3829]: Accepted publickey for core from 10.0.0.1 port 45796 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:02:56.536431 sshd-session[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:56.541383 systemd-logind[1575]: New session 9 of user core. May 13 10:02:56.547989 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 10:02:56.571981 kubelet[2858]: I0513 10:02:56.571947 2858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:02:56.575633 kubelet[2858]: E0513 10:02:56.575425 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:56.661516 sshd[3831]: Connection closed by 10.0.0.1 port 45796 May 13 10:02:56.661920 sshd-session[3829]: pam_unix(sshd:session): session closed for user core May 13 10:02:56.666720 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:45796.service: Deactivated successfully. May 13 10:02:56.668823 systemd[1]: session-9.scope: Deactivated successfully. May 13 10:02:56.669636 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. May 13 10:02:56.670856 systemd-logind[1575]: Removed session 9. May 13 10:02:56.884145 kubelet[2858]: E0513 10:02:56.884004 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:01.687146 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:45802.service - OpenSSH per-connection server daemon (10.0.0.1:45802). 
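
Every failed RunPodSandbox in this stretch has the same root cause: the Calico CNI plugin refuses to network a pod until /var/lib/calico/nodename exists, and that file is only written by the calico/node container, whose image is still being pulled at this point (it starts at 10:03:05 below, after which the cali* interfaces come up and the kubelet's retries succeed). A sketch of that guard, assuming only what the error text itself states:

```go
// Hypothetical re-implementation of the guard behind the repeated sandbox
// failures above: the Calico CNI plugin reads /var/lib/calico/nodename,
// which calico/node writes at startup; until it exists, every pod network
// setup fails and the kubelet retries with backoff.
package main

import (
	"fmt"
	"os"
	"strings"
)

func nodename() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot network pod:", err)
		os.Exit(1) // kubelet retries RunPodSandbox until calico/node is up
	}
	fmt.Println("networking pods as node", name)
}
```
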
May 13 10:03:01.777184 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 45802 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:03:01.779367 sshd-session[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:01.786348 systemd-logind[1575]: New session 10 of user core. May 13 10:03:01.796145 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 10:03:01.920144 sshd[3853]: Connection closed by 10.0.0.1 port 45802 May 13 10:03:01.921047 sshd-session[3851]: pam_unix(sshd:session): session closed for user core May 13 10:03:01.925989 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:45802.service: Deactivated successfully. May 13 10:03:01.928588 systemd[1]: session-10.scope: Deactivated successfully. May 13 10:03:01.929773 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. May 13 10:03:01.931314 systemd-logind[1575]: Removed session 10. May 13 10:03:02.091604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581935831.mount: Deactivated successfully. May 13 10:03:05.410611 containerd[1591]: time="2025-05-13T10:03:05.410544368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:05.456264 containerd[1591]: time="2025-05-13T10:03:05.456210043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 10:03:05.459935 containerd[1591]: time="2025-05-13T10:03:05.459842005Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:05.464290 containerd[1591]: time="2025-05-13T10:03:05.464244254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:05.464666 containerd[1591]: time="2025-05-13T10:03:05.464634698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 10.583404089s" May 13 10:03:05.464666 containerd[1591]: time="2025-05-13T10:03:05.464663592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 10:03:05.489782 containerd[1591]: time="2025-05-13T10:03:05.489743621Z" level=info msg="CreateContainer within sandbox \"00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 10:03:05.555299 containerd[1591]: time="2025-05-13T10:03:05.555229125Z" level=info msg="Container 4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:05.568305 containerd[1591]: time="2025-05-13T10:03:05.568240976Z" level=info msg="CreateContainer within sandbox \"00938ed3a50d4a461e1a13da772ce99518ae3cb1985c4ae0a8d77834af6d8ba5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e\"" May 13 10:03:05.572178 
containerd[1591]: time="2025-05-13T10:03:05.572129219Z" level=info msg="StartContainer for \"4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e\"" May 13 10:03:05.573503 containerd[1591]: time="2025-05-13T10:03:05.573456763Z" level=info msg="connecting to shim 4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e" address="unix:///run/containerd/s/e0394ab9daab6f7828cc63a63c08e8f5757a3b44a3cdaa752877004bdc9f781b" protocol=ttrpc version=3 May 13 10:03:05.600038 systemd[1]: Started cri-containerd-4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e.scope - libcontainer container 4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e. May 13 10:03:05.647770 containerd[1591]: time="2025-05-13T10:03:05.647725395Z" level=info msg="StartContainer for \"4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e\" returns successfully" May 13 10:03:05.707847 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 10:03:05.708047 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 10:03:05.786582 kubelet[2858]: E0513 10:03:05.786539 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:05.791997 containerd[1591]: time="2025-05-13T10:03:05.791937374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-8lxcx,Uid:8d5b8788-d690-4a64-a79f-7dcf9f48991f,Namespace:calico-apiserver,Attempt:0,}" May 13 10:03:05.792137 containerd[1591]: time="2025-05-13T10:03:05.792088859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9wnm7,Uid:8baff794-36a7-48d3-96db-8c84e5de9a94,Namespace:kube-system,Attempt:0,}" May 13 10:03:05.919020 kubelet[2858]: E0513 10:03:05.918706 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:05.942740 kubelet[2858]: I0513 10:03:05.941895 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5qpth" podStartSLOduration=1.6655090380000002 podStartE2EDuration="26.941861684s" podCreationTimestamp="2025-05-13 10:02:39 +0000 UTC" firstStartedPulling="2025-05-13 10:02:40.192702334 +0000 UTC m=+20.528739581" lastFinishedPulling="2025-05-13 10:03:05.469054981 +0000 UTC m=+45.805092227" observedRunningTime="2025-05-13 10:03:05.940669785 +0000 UTC m=+46.276707041" watchObservedRunningTime="2025-05-13 10:03:05.941861684 +0000 UTC m=+46.277898930" May 13 10:03:06.006277 systemd-networkd[1496]: cali964f0843e23: Link UP May 13 10:03:06.006532 systemd-networkd[1496]: cali964f0843e23: Gained carrier May 13 10:03:06.024672 containerd[1591]: 2025-05-13 10:03:05.831 [INFO][3922] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 10:03:06.024672 containerd[1591]: 2025-05-13 10:03:05.845 [INFO][3922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0 coredns-7db6d8ff4d- kube-system 8baff794-36a7-48d3-96db-8c84e5de9a94 751 0 2025-05-13 10:02:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-9wnm7 eth0 coredns [] 
[] [kns.kube-system ksa.kube-system.coredns] cali964f0843e23 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-" May 13 10:03:06.024672 containerd[1591]: 2025-05-13 10:03:05.845 [INFO][3922] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" May 13 10:03:06.024672 containerd[1591]: 2025-05-13 10:03:05.930 [INFO][3958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" HandleID="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Workload="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.945 [INFO][3958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" HandleID="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Workload="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000513e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-9wnm7", "timestamp":"2025-05-13 10:03:05.930835603 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.945 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.945 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
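Annotation: the pod_startup_latency_tracker record above for calico-node-5qpth is internally consistent; the SLO duration is the end-to-end duration minus the time spent pulling the image. A quick Go check of that arithmetic using the timestamps from the log record:

package main

import (
	"fmt"
	"time"
)

// ts parses an RFC3339 timestamp copied from the log record.
func ts(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-05-13T10:02:39Z")              // podCreationTimestamp
	running := ts("2025-05-13T10:03:05.941861684Z")    // watchObservedRunningTime
	firstPull := ts("2025-05-13T10:02:40.192702334Z")  // firstStartedPulling
	lastPull := ts("2025-05-13T10:03:05.469054981Z")   // lastFinishedPulling

	e2e := running.Sub(created)     // 26.941861684s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 25.276352647s spent pulling calico/node
	fmt.Println(e2e, e2e-pull)      // SLO duration ~= 1.665509037s, matching the log
}

Nearly all of the 27-second startup was the 10.58s-plus image pull of ghcr.io/flatcar/calico/node:v3.29.3 recorded earlier, which is why podStartSLOduration (pull time excluded) is only about 1.67s.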
May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.946 [INFO][3958] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.948 [INFO][3958] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" host="localhost" May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.956 [INFO][3958] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.963 [INFO][3958] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.967 [INFO][3958] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.971 [INFO][3958] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:06.025266 containerd[1591]: 2025-05-13 10:03:05.972 [INFO][3958] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" host="localhost" May 13 10:03:06.025628 containerd[1591]: 2025-05-13 10:03:05.974 [INFO][3958] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798 May 13 10:03:06.025628 containerd[1591]: 2025-05-13 10:03:05.978 [INFO][3958] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" host="localhost" May 13 10:03:06.025628 containerd[1591]: 2025-05-13 10:03:05.985 [INFO][3958] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" host="localhost" May 13 10:03:06.025628 containerd[1591]: 2025-05-13 10:03:05.985 [INFO][3958] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" host="localhost" May 13 10:03:06.025628 containerd[1591]: 2025-05-13 10:03:05.985 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
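Annotation: the IPAM trace above follows Calico's standard sequence: acquire the host-wide IPAM lock, confirm this host's affinity to the 192.168.88.128/26 block, then claim the first free address in it. A simplified Go sketch of the block-scan step; the real implementation also handles reservations, handles, and multiple blocks, and the pre-allocated .128 below is an assumption standing in for the node's own tunnel address:

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeIP sketches the core of Calico's per-block assignment: walk
// the /26 affine to this host and hand out the first address that is
// not already allocated. Calico serializes this behind the host-wide
// IPAM lock seen in the log ("Acquired"/"Released host-wide IPAM lock").
func nextFreeIP(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{
		// Assumed already taken (e.g. by the node itself); the log only
		// shows that assignment starts at .129.
		netip.MustParseAddr("192.168.88.128"): true,
	}
	if ip, ok := nextFreeIP(block, allocated); ok {
		fmt.Println(ip) // 192.168.88.129, the address claimed for coredns above
	}
}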
May 13 10:03:06.025628 containerd[1591]: 2025-05-13 10:03:05.985 [INFO][3958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" HandleID="k8s-pod-network.2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Workload="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" May 13 10:03:06.025900 containerd[1591]: 2025-05-13 10:03:05.988 [INFO][3922] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8baff794-36a7-48d3-96db-8c84e5de9a94", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-9wnm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali964f0843e23", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:06.025975 containerd[1591]: 2025-05-13 10:03:05.993 [INFO][3922] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" May 13 10:03:06.025975 containerd[1591]: 2025-05-13 10:03:05.993 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali964f0843e23 ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" May 13 10:03:06.025975 containerd[1591]: 2025-05-13 10:03:06.005 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" May 13 10:03:06.026085 containerd[1591]: 2025-05-13 10:03:06.008 
[INFO][3922] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8baff794-36a7-48d3-96db-8c84e5de9a94", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798", Pod:"coredns-7db6d8ff4d-9wnm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali964f0843e23", MAC:"92:52:57:ef:dd:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:06.026085 containerd[1591]: 2025-05-13 10:03:06.021 [INFO][3922] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9wnm7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9wnm7-eth0" May 13 10:03:06.038015 systemd-networkd[1496]: cali48f7b0a1a6e: Link UP May 13 10:03:06.038656 systemd-networkd[1496]: cali48f7b0a1a6e: Gained carrier May 13 10:03:06.040334 containerd[1591]: time="2025-05-13T10:03:06.040272993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e\" id:\"3e879ac14172dc71a86770fd668c5c5cff741a6d90cd73ee4da9648d810bd207\" pid:3984 exit_status:1 exited_at:{seconds:1747130586 nanos:36212255}" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.827 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.844 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0 calico-apiserver-7597b9c9d9- calico-apiserver 8d5b8788-d690-4a64-a79f-7dcf9f48991f 756 0 2025-05-13 10:02:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:7597b9c9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7597b9c9d9-8lxcx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali48f7b0a1a6e [] []}} ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.845 [INFO][3935] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.930 [INFO][3956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" HandleID="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Workload="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.945 [INFO][3956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" HandleID="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Workload="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013ac00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7597b9c9d9-8lxcx", "timestamp":"2025-05-13 10:03:05.930893932 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.945 [INFO][3956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.985 [INFO][3956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.985 [INFO][3956] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.988 [INFO][3956] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:05.993 [INFO][3956] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.002 [INFO][3956] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.005 [INFO][3956] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.010 [INFO][3956] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.010 [INFO][3956] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.011 [INFO][3956] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.018 [INFO][3956] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.027 [INFO][3956] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.028 [INFO][3956] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" host="localhost" May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.028 [INFO][3956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 10:03:06.053421 containerd[1591]: 2025-05-13 10:03:06.028 [INFO][3956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" HandleID="k8s-pod-network.d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Workload="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" May 13 10:03:06.054181 containerd[1591]: 2025-05-13 10:03:06.034 [INFO][3935] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0", GenerateName:"calico-apiserver-7597b9c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d5b8788-d690-4a64-a79f-7dcf9f48991f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7597b9c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7597b9c9d9-8lxcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48f7b0a1a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:06.054181 containerd[1591]: 2025-05-13 10:03:06.035 [INFO][3935] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" May 13 10:03:06.054181 containerd[1591]: 2025-05-13 10:03:06.035 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48f7b0a1a6e ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" May 13 10:03:06.054181 containerd[1591]: 2025-05-13 10:03:06.037 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" May 13 10:03:06.054181 containerd[1591]: 2025-05-13 10:03:06.037 [INFO][3935] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0", GenerateName:"calico-apiserver-7597b9c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d5b8788-d690-4a64-a79f-7dcf9f48991f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7597b9c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f", Pod:"calico-apiserver-7597b9c9d9-8lxcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48f7b0a1a6e", MAC:"da:27:b0:17:21:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:06.054181 containerd[1591]: 2025-05-13 10:03:06.050 [INFO][3935] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-8lxcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--8lxcx-eth0" May 13 10:03:06.409027 containerd[1591]: time="2025-05-13T10:03:06.408546249Z" level=info msg="connecting to shim 2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798" address="unix:///run/containerd/s/8746c5c46b060e6171ad835d77f9e239b2fc8fd43ab841c5de276f87eea775e5" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:06.410803 containerd[1591]: time="2025-05-13T10:03:06.410663106Z" level=info msg="connecting to shim d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f" address="unix:///run/containerd/s/fe04b13fb1724b122f8d3f34a3bd29df500109561f74175e0b8ce05547ad9789" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:06.439078 systemd[1]: Started cri-containerd-2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798.scope - libcontainer container 2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798. May 13 10:03:06.444134 systemd[1]: Started cri-containerd-d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f.scope - libcontainer container d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f. 
May 13 10:03:06.457414 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:06.462124 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:06.497206 containerd[1591]: time="2025-05-13T10:03:06.497163980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9wnm7,Uid:8baff794-36a7-48d3-96db-8c84e5de9a94,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798\"" May 13 10:03:06.507221 kubelet[2858]: E0513 10:03:06.507154 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:06.510486 containerd[1591]: time="2025-05-13T10:03:06.510409930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-8lxcx,Uid:8d5b8788-d690-4a64-a79f-7dcf9f48991f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f\"" May 13 10:03:06.510486 containerd[1591]: time="2025-05-13T10:03:06.510439205Z" level=info msg="CreateContainer within sandbox \"2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 10:03:06.512302 containerd[1591]: time="2025-05-13T10:03:06.512271857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 10:03:06.620226 containerd[1591]: time="2025-05-13T10:03:06.620161957Z" level=info msg="Container 69bd8261db6ccb009c5881304a5bf01f7b6f10d848b8b67706b0329eff6b52be: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:06.620860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276723784.mount: Deactivated successfully. May 13 10:03:06.623457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797800111.mount: Deactivated successfully. May 13 10:03:06.776281 containerd[1591]: time="2025-05-13T10:03:06.776220452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667f6586cc-nnvqm,Uid:499fb77c-0fee-4700-88ee-d3e454a651ef,Namespace:calico-system,Attempt:0,}" May 13 10:03:06.880575 containerd[1591]: time="2025-05-13T10:03:06.880524637Z" level=info msg="CreateContainer within sandbox \"2ff4ec9140e62d94d71c41981c1e1635de4840f948386626394095c922270798\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69bd8261db6ccb009c5881304a5bf01f7b6f10d848b8b67706b0329eff6b52be\"" May 13 10:03:06.881038 containerd[1591]: time="2025-05-13T10:03:06.881013875Z" level=info msg="StartContainer for \"69bd8261db6ccb009c5881304a5bf01f7b6f10d848b8b67706b0329eff6b52be\"" May 13 10:03:06.881856 containerd[1591]: time="2025-05-13T10:03:06.881812124Z" level=info msg="connecting to shim 69bd8261db6ccb009c5881304a5bf01f7b6f10d848b8b67706b0329eff6b52be" address="unix:///run/containerd/s/8746c5c46b060e6171ad835d77f9e239b2fc8fd43ab841c5de276f87eea775e5" protocol=ttrpc version=3 May 13 10:03:06.911292 systemd[1]: Started cri-containerd-69bd8261db6ccb009c5881304a5bf01f7b6f10d848b8b67706b0329eff6b52be.scope - libcontainer container 69bd8261db6ccb009c5881304a5bf01f7b6f10d848b8b67706b0329eff6b52be. 
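Annotation: the "Nameserver limits exceeded" warnings that recur throughout this log come from kubelet's resolv.conf handling: the glibc resolver honors at most three nameserver entries, so kubelet truncates the host's list and logs the line it actually applied (here "1.1.1.1 1.0.0.1 8.8.8.8"). A sketch of that truncation; the fourth entry below is hypothetical, since the log only shows the three that survived:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc resolver limit (MAXNS = 3) that
// kubelet enforces when building a pod's resolv.conf; extra entries are
// dropped and a warning like the one in this log is emitted.
const maxNameservers = 3

// applyNameserverLimit returns the applied nameserver list and whether
// any entries had to be omitted.
func applyNameserverLimit(nameservers []string) ([]string, bool) {
	if len(nameservers) <= maxNameservers {
		return nameservers, false
	}
	return nameservers[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf with four entries; the applied line
	// in the log keeps only the first three.
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := applyNameserverLimit(ns)
	if truncated {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}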
May 13 10:03:06.924679 kubelet[2858]: E0513 10:03:06.924641 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:06.935069 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:37302.service - OpenSSH per-connection server daemon (10.0.0.1:37302). May 13 10:03:07.001691 containerd[1591]: time="2025-05-13T10:03:07.001637293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e\" id:\"078fec4da02f05d273055bb25192ab94b4b25c2c7691098662a81c9555e6fe39\" pid:4158 exit_status:1 exited_at:{seconds:1747130587 nanos:1255776}" May 13 10:03:07.010077 containerd[1591]: time="2025-05-13T10:03:07.010003015Z" level=info msg="StartContainer for \"69bd8261db6ccb009c5881304a5bf01f7b6f10d848b8b67706b0329eff6b52be\" returns successfully" May 13 10:03:07.020409 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 37302 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:03:07.022498 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:07.029180 systemd-logind[1575]: New session 11 of user core. May 13 10:03:07.034128 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 10:03:07.144738 systemd-networkd[1496]: calic179866b548: Link UP May 13 10:03:07.146815 systemd-networkd[1496]: calic179866b548: Gained carrier May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.010 [INFO][4175] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.021 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0 calico-kube-controllers-667f6586cc- calico-system 499fb77c-0fee-4700-88ee-d3e454a651ef 757 0 2025-05-13 10:02:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:667f6586cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-667f6586cc-nnvqm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic179866b548 [] []}} ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.021 [INFO][4175] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.071 [INFO][4194] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" HandleID="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Workload="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.086 [INFO][4194] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" HandleID="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Workload="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317a80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-667f6586cc-nnvqm", "timestamp":"2025-05-13 10:03:07.071062191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.086 [INFO][4194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.086 [INFO][4194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.086 [INFO][4194] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.088 [INFO][4194] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.094 [INFO][4194] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.099 [INFO][4194] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.100 [INFO][4194] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.103 [INFO][4194] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.103 [INFO][4194] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.105 [INFO][4194] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293 May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.114 [INFO][4194] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.129 [INFO][4194] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.129 [INFO][4194] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" host="localhost" May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.129 [INFO][4194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
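Annotation: all three workloads so far (.129 for coredns, .130 for the apiserver, and now .131 for kube-controllers) come out of the same 192.168.88.128/26, which is consistent with Calico's default IPAM block size of /26, i.e. 64 addresses per host-affine block:

package main

import "fmt"

// A /26 block leaves 32-26 = 6 host bits, so each block affine to a
// node covers 64 addresses before a new block must be claimed.
func main() {
	fmt.Println(1 << (32 - 26)) // 64
}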
May 13 10:03:07.260884 containerd[1591]: 2025-05-13 10:03:07.129 [INFO][4194] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" HandleID="k8s-pod-network.227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Workload="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" May 13 10:03:07.262390 containerd[1591]: 2025-05-13 10:03:07.136 [INFO][4175] cni-plugin/k8s.go 386: Populated endpoint ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0", GenerateName:"calico-kube-controllers-667f6586cc-", Namespace:"calico-system", SelfLink:"", UID:"499fb77c-0fee-4700-88ee-d3e454a651ef", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"667f6586cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-667f6586cc-nnvqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic179866b548", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:07.262390 containerd[1591]: 2025-05-13 10:03:07.136 [INFO][4175] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" May 13 10:03:07.262390 containerd[1591]: 2025-05-13 10:03:07.136 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic179866b548 ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" May 13 10:03:07.262390 containerd[1591]: 2025-05-13 10:03:07.146 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" May 13 10:03:07.262390 containerd[1591]: 2025-05-13 10:03:07.149 [INFO][4175] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0", GenerateName:"calico-kube-controllers-667f6586cc-", Namespace:"calico-system", SelfLink:"", UID:"499fb77c-0fee-4700-88ee-d3e454a651ef", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"667f6586cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293", Pod:"calico-kube-controllers-667f6586cc-nnvqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic179866b548", MAC:"d6:65:72:46:34:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:07.262390 containerd[1591]: 2025-05-13 10:03:07.256 [INFO][4175] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" Namespace="calico-system" Pod="calico-kube-controllers-667f6586cc-nnvqm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667f6586cc--nnvqm-eth0" May 13 10:03:07.414332 sshd[4208]: Connection closed by 10.0.0.1 port 37302 May 13 10:03:07.414175 sshd-session[4154]: pam_unix(sshd:session): session closed for user core May 13 10:03:07.422723 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:37302.service: Deactivated successfully. May 13 10:03:07.428027 containerd[1591]: time="2025-05-13T10:03:07.426055129Z" level=info msg="connecting to shim 227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293" address="unix:///run/containerd/s/24d40eadfbefafde50f75e5a60f2e74fc1c7b9c2e207d730eb00f7d38577606f" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:07.431173 systemd[1]: session-11.scope: Deactivated successfully. May 13 10:03:07.436768 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. May 13 10:03:07.442608 systemd-logind[1575]: Removed session 11. May 13 10:03:07.481077 systemd[1]: Started cri-containerd-227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293.scope - libcontainer container 227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293. 
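Annotation: shortly after each cali* veth comes up, systemd-networkd reports "Gained IPv6LL". If the link-local address is derived by the classic EUI-64 scheme (an assumption here; stable-privacy addressing is also common), it follows mechanically from the interface MAC shown in the endpoint dump above:

package main

import (
	"fmt"
	"net"
)

// linkLocalEUI64 derives the traditional EUI-64 IPv6 link-local address
// from a 6-byte MAC: flip the universal/local bit of the first octet
// and splice ff:fe into the middle of the address.
func linkLocalEUI64(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	// MAC of calic179866b548 from the WorkloadEndpoint dump above.
	mac, _ := net.ParseMAC("d6:65:72:46:34:22")
	fmt.Println(linkLocalEUI64(mac)) // fe80::d465:72ff:fe46:3422
}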
May 13 10:03:07.497056 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:07.647540 systemd-networkd[1496]: vxlan.calico: Link UP May 13 10:03:07.647551 systemd-networkd[1496]: vxlan.calico: Gained carrier May 13 10:03:07.653821 containerd[1591]: time="2025-05-13T10:03:07.653757032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667f6586cc-nnvqm,Uid:499fb77c-0fee-4700-88ee-d3e454a651ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293\"" May 13 10:03:07.660030 systemd-networkd[1496]: cali48f7b0a1a6e: Gained IPv6LL May 13 10:03:07.776430 kubelet[2858]: E0513 10:03:07.776384 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:07.777698 containerd[1591]: time="2025-05-13T10:03:07.777119559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k2tdz,Uid:8cfe200d-9f73-4fa4-9cc4-8c6ccb628016,Namespace:kube-system,Attempt:0,}" May 13 10:03:07.777698 containerd[1591]: time="2025-05-13T10:03:07.777172588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-thf6f,Uid:7ce333ae-3ee7-43d5-a75e-02f0517a7db5,Namespace:calico-system,Attempt:0,}" May 13 10:03:07.916060 systemd-networkd[1496]: cali964f0843e23: Gained IPv6LL May 13 10:03:07.929824 kubelet[2858]: E0513 10:03:07.929776 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:07.966435 kubelet[2858]: I0513 10:03:07.966233 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9wnm7" podStartSLOduration=34.966212274 podStartE2EDuration="34.966212274s" podCreationTimestamp="2025-05-13 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:07.961265282 +0000 UTC m=+48.297302528" watchObservedRunningTime="2025-05-13 10:03:07.966212274 +0000 UTC m=+48.302249640" May 13 10:03:08.100653 systemd-networkd[1496]: cali9129de4fdb6: Link UP May 13 10:03:08.102692 systemd-networkd[1496]: cali9129de4fdb6: Gained carrier May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:07.970 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0 coredns-7db6d8ff4d- kube-system 8cfe200d-9f73-4fa4-9cc4-8c6ccb628016 755 0 2025-05-13 10:02:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-k2tdz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9129de4fdb6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:07.975 [INFO][4464] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.057 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" HandleID="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Workload="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.067 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" HandleID="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Workload="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d560), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-k2tdz", "timestamp":"2025-05-13 10:03:08.05702927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.068 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.068 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.068 [INFO][4516] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.069 [INFO][4516] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.074 [INFO][4516] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.078 [INFO][4516] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.079 [INFO][4516] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.081 [INFO][4516] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.081 [INFO][4516] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.082 [INFO][4516] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5 May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.086 [INFO][4516] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.093 [INFO][4516] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.093 [INFO][4516] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" host="localhost" May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.093 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 10:03:08.120147 containerd[1591]: 2025-05-13 10:03:08.093 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" HandleID="k8s-pod-network.ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Workload="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" May 13 10:03:08.120816 containerd[1591]: 2025-05-13 10:03:08.097 [INFO][4464] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8cfe200d-9f73-4fa4-9cc4-8c6ccb628016", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-k2tdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9129de4fdb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:08.120816 containerd[1591]: 2025-05-13 10:03:08.097 [INFO][4464] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" May 13 10:03:08.120816 containerd[1591]: 2025-05-13 10:03:08.097 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9129de4fdb6 
ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" May 13 10:03:08.120816 containerd[1591]: 2025-05-13 10:03:08.102 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" May 13 10:03:08.120816 containerd[1591]: 2025-05-13 10:03:08.103 [INFO][4464] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8cfe200d-9f73-4fa4-9cc4-8c6ccb628016", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5", Pod:"coredns-7db6d8ff4d-k2tdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9129de4fdb6", MAC:"16:be:e1:69:ff:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:08.120816 containerd[1591]: 2025-05-13 10:03:08.116 [INFO][4464] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k2tdz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k2tdz-eth0" May 13 10:03:08.291149 systemd-networkd[1496]: cali060ea239815: Link UP May 13 10:03:08.291830 systemd-networkd[1496]: cali060ea239815: Gained carrier May 13 10:03:08.364108 systemd-networkd[1496]: calic179866b548: Gained IPv6LL May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.022 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--thf6f-eth0 csi-node-driver- calico-system 
7ce333ae-3ee7-43d5-a75e-02f0517a7db5 616 0 2025-05-13 10:02:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-thf6f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali060ea239815 [] []}} ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.022 [INFO][4487] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-eth0" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.070 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" HandleID="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Workload="localhost-k8s-csi--node--driver--thf6f-eth0" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.078 [INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" HandleID="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Workload="localhost-k8s-csi--node--driver--thf6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-thf6f", "timestamp":"2025-05-13 10:03:08.070481606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.078 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.093 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.093 [INFO][4525] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.096 [INFO][4525] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.100 [INFO][4525] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.107 [INFO][4525] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.110 [INFO][4525] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.116 [INFO][4525] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.116 [INFO][4525] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.118 [INFO][4525] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.128 [INFO][4525] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.286 [INFO][4525] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.286 [INFO][4525] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" host="localhost" May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.286 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 10:03:08.372233 containerd[1591]: 2025-05-13 10:03:08.286 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" HandleID="k8s-pod-network.69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Workload="localhost-k8s-csi--node--driver--thf6f-eth0" May 13 10:03:08.372826 containerd[1591]: 2025-05-13 10:03:08.289 [INFO][4487] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--thf6f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ce333ae-3ee7-43d5-a75e-02f0517a7db5", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-thf6f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali060ea239815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:08.372826 containerd[1591]: 2025-05-13 10:03:08.289 [INFO][4487] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-eth0" May 13 10:03:08.372826 containerd[1591]: 2025-05-13 10:03:08.289 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali060ea239815 ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-eth0" May 13 10:03:08.372826 containerd[1591]: 2025-05-13 10:03:08.291 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-eth0" May 13 10:03:08.372826 containerd[1591]: 2025-05-13 10:03:08.291 [INFO][4487] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--thf6f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ce333ae-3ee7-43d5-a75e-02f0517a7db5", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb", Pod:"csi-node-driver-thf6f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali060ea239815", MAC:"9e:4d:25:b4:55:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:08.372826 containerd[1591]: 2025-05-13 10:03:08.367 [INFO][4487] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" Namespace="calico-system" Pod="csi-node-driver-thf6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--thf6f-eth0" May 13 10:03:08.435591 containerd[1591]: time="2025-05-13T10:03:08.433910576Z" level=info msg="connecting to shim ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5" address="unix:///run/containerd/s/554f4b95e34fc9e95ed9cee8f4e60f6e5bf9a7fd07669760811cb811feba87f4" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:08.461037 containerd[1591]: time="2025-05-13T10:03:08.460675707Z" level=info msg="connecting to shim 69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb" address="unix:///run/containerd/s/5e22172e721e17bee064089d6b99f8fec6e2eb7a37f123acf02eddbd4d156573" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:08.466106 systemd[1]: Started cri-containerd-ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5.scope - libcontainer container ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5. May 13 10:03:08.479980 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:08.492059 systemd[1]: Started cri-containerd-69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb.scope - libcontainer container 69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb. 
May 13 10:03:08.509639 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:08.516829 containerd[1591]: time="2025-05-13T10:03:08.516776927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k2tdz,Uid:8cfe200d-9f73-4fa4-9cc4-8c6ccb628016,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5\"" May 13 10:03:08.517787 kubelet[2858]: E0513 10:03:08.517746 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:08.520508 containerd[1591]: time="2025-05-13T10:03:08.520479381Z" level=info msg="CreateContainer within sandbox \"ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 10:03:08.531642 containerd[1591]: time="2025-05-13T10:03:08.531589689Z" level=info msg="Container 7447c965d2b2445684b6775f3ccb3ec20e1a794d407f8d98c134ab923353334d: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:08.532294 containerd[1591]: time="2025-05-13T10:03:08.532265028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-thf6f,Uid:7ce333ae-3ee7-43d5-a75e-02f0517a7db5,Namespace:calico-system,Attempt:0,} returns sandbox id \"69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb\"" May 13 10:03:08.541527 containerd[1591]: time="2025-05-13T10:03:08.541483321Z" level=info msg="CreateContainer within sandbox \"ae41f54fd0f3c9c00e1d9011d14c4711cf958d9ab99b058e5e8ddd6f6032b4f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7447c965d2b2445684b6775f3ccb3ec20e1a794d407f8d98c134ab923353334d\"" May 13 10:03:08.542181 containerd[1591]: time="2025-05-13T10:03:08.542158079Z" level=info msg="StartContainer for \"7447c965d2b2445684b6775f3ccb3ec20e1a794d407f8d98c134ab923353334d\"" May 13 10:03:08.543031 containerd[1591]: time="2025-05-13T10:03:08.542980524Z" level=info msg="connecting to shim 7447c965d2b2445684b6775f3ccb3ec20e1a794d407f8d98c134ab923353334d" address="unix:///run/containerd/s/554f4b95e34fc9e95ed9cee8f4e60f6e5bf9a7fd07669760811cb811feba87f4" protocol=ttrpc version=3 May 13 10:03:08.568061 systemd[1]: Started cri-containerd-7447c965d2b2445684b6775f3ccb3ec20e1a794d407f8d98c134ab923353334d.scope - libcontainer container 7447c965d2b2445684b6775f3ccb3ec20e1a794d407f8d98c134ab923353334d. 
May 13 10:03:08.603680 containerd[1591]: time="2025-05-13T10:03:08.603629455Z" level=info msg="StartContainer for \"7447c965d2b2445684b6775f3ccb3ec20e1a794d407f8d98c134ab923353334d\" returns successfully" May 13 10:03:08.776551 containerd[1591]: time="2025-05-13T10:03:08.776479125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-dv8bp,Uid:115eb9a3-80d8-4fef-9662-417de41e8958,Namespace:calico-apiserver,Attempt:0,}" May 13 10:03:08.877161 systemd-networkd[1496]: cali769c601b033: Link UP May 13 10:03:08.877402 systemd-networkd[1496]: cali769c601b033: Gained carrier May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.810 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0 calico-apiserver-7597b9c9d9- calico-apiserver 115eb9a3-80d8-4fef-9662-417de41e8958 754 0 2025-05-13 10:02:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7597b9c9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7597b9c9d9-dv8bp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali769c601b033 [] []}} ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.810 [INFO][4696] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.837 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" HandleID="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Workload="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.846 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" HandleID="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Workload="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f5a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7597b9c9d9-dv8bp", "timestamp":"2025-05-13 10:03:08.837650729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.846 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.846 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.846 [INFO][4712] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.848 [INFO][4712] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.852 [INFO][4712] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.856 [INFO][4712] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.858 [INFO][4712] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.860 [INFO][4712] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.860 [INFO][4712] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.862 [INFO][4712] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1 May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.865 [INFO][4712] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.871 [INFO][4712] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.871 [INFO][4712] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" host="localhost" May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.871 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 10:03:08.893000 containerd[1591]: 2025-05-13 10:03:08.871 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" HandleID="k8s-pod-network.b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Workload="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" May 13 10:03:08.893533 containerd[1591]: 2025-05-13 10:03:08.874 [INFO][4696] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0", GenerateName:"calico-apiserver-7597b9c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"115eb9a3-80d8-4fef-9662-417de41e8958", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7597b9c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7597b9c9d9-dv8bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali769c601b033", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:08.893533 containerd[1591]: 2025-05-13 10:03:08.874 [INFO][4696] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" May 13 10:03:08.893533 containerd[1591]: 2025-05-13 10:03:08.874 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali769c601b033 ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" May 13 10:03:08.893533 containerd[1591]: 2025-05-13 10:03:08.876 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" May 13 10:03:08.893533 containerd[1591]: 2025-05-13 10:03:08.878 [INFO][4696] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0", GenerateName:"calico-apiserver-7597b9c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"115eb9a3-80d8-4fef-9662-417de41e8958", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7597b9c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1", Pod:"calico-apiserver-7597b9c9d9-dv8bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali769c601b033", MAC:"46:6e:55:a2:c6:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:08.893533 containerd[1591]: 2025-05-13 10:03:08.887 [INFO][4696] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-7597b9c9d9-dv8bp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7597b9c9d9--dv8bp-eth0" May 13 10:03:08.922521 containerd[1591]: time="2025-05-13T10:03:08.922475439Z" level=info msg="connecting to shim b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1" address="unix:///run/containerd/s/f2e8ba40ed15fc5c6139ec5962c734173adfe812191fc53c161060ab0b502580" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:08.937029 kubelet[2858]: E0513 10:03:08.936931 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:08.937390 kubelet[2858]: E0513 10:03:08.937164 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:08.940569 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL May 13 10:03:08.949239 kubelet[2858]: I0513 10:03:08.949190 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k2tdz" podStartSLOduration=35.94917147 podStartE2EDuration="35.94917147s" podCreationTimestamp="2025-05-13 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:08.948675569 +0000 UTC m=+49.284712815" watchObservedRunningTime="2025-05-13 10:03:08.94917147 +0000 UTC m=+49.285208716" May 13 
10:03:08.952087 systemd[1]: Started cri-containerd-b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1.scope - libcontainer container b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1. May 13 10:03:08.968403 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:08.998356 containerd[1591]: time="2025-05-13T10:03:08.998308430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7597b9c9d9-dv8bp,Uid:115eb9a3-80d8-4fef-9662-417de41e8958,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1\"" May 13 10:03:09.388446 systemd-networkd[1496]: cali9129de4fdb6: Gained IPv6LL May 13 10:03:09.452697 systemd-networkd[1496]: cali060ea239815: Gained IPv6LL May 13 10:03:09.940195 kubelet[2858]: E0513 10:03:09.939814 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:10.538196 containerd[1591]: time="2025-05-13T10:03:10.538129681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:10.538928 containerd[1591]: time="2025-05-13T10:03:10.538862737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 10:03:10.540178 systemd-networkd[1496]: cali769c601b033: Gained IPv6LL May 13 10:03:10.541052 containerd[1591]: time="2025-05-13T10:03:10.540083319Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:10.542708 containerd[1591]: time="2025-05-13T10:03:10.542669768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:10.543272 containerd[1591]: time="2025-05-13T10:03:10.543246741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.030947432s" May 13 10:03:10.543325 containerd[1591]: time="2025-05-13T10:03:10.543276076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 10:03:10.544610 containerd[1591]: time="2025-05-13T10:03:10.544581828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 10:03:10.545898 containerd[1591]: time="2025-05-13T10:03:10.545841644Z" level=info msg="CreateContainer within sandbox \"d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 10:03:10.556304 containerd[1591]: time="2025-05-13T10:03:10.556228814Z" level=info msg="Container 3e3940fa405e425b2b60de9088e7e9cf9723f49f570aa9eb92394c4c4ac338b7: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:10.568458 containerd[1591]: time="2025-05-13T10:03:10.568401235Z" 
level=info msg="CreateContainer within sandbox \"d78ce2aad4f14822845791df7c3ec29d039cd5bae759d5d15156653f367aea9f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e3940fa405e425b2b60de9088e7e9cf9723f49f570aa9eb92394c4c4ac338b7\"" May 13 10:03:10.569147 containerd[1591]: time="2025-05-13T10:03:10.569093986Z" level=info msg="StartContainer for \"3e3940fa405e425b2b60de9088e7e9cf9723f49f570aa9eb92394c4c4ac338b7\"" May 13 10:03:10.570413 containerd[1591]: time="2025-05-13T10:03:10.570382917Z" level=info msg="connecting to shim 3e3940fa405e425b2b60de9088e7e9cf9723f49f570aa9eb92394c4c4ac338b7" address="unix:///run/containerd/s/fe04b13fb1724b122f8d3f34a3bd29df500109561f74175e0b8ce05547ad9789" protocol=ttrpc version=3 May 13 10:03:10.602233 systemd[1]: Started cri-containerd-3e3940fa405e425b2b60de9088e7e9cf9723f49f570aa9eb92394c4c4ac338b7.scope - libcontainer container 3e3940fa405e425b2b60de9088e7e9cf9723f49f570aa9eb92394c4c4ac338b7. May 13 10:03:10.716789 containerd[1591]: time="2025-05-13T10:03:10.716732033Z" level=info msg="StartContainer for \"3e3940fa405e425b2b60de9088e7e9cf9723f49f570aa9eb92394c4c4ac338b7\" returns successfully" May 13 10:03:10.943505 kubelet[2858]: E0513 10:03:10.943413 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:11.043292 containerd[1591]: time="2025-05-13T10:03:11.042464716Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e\" id:\"7d46c918c246205bd053fa26f8b6b63d29cd9516149a54854388f940491e04a5\" pid:4842 exited_at:{seconds:1747130591 nanos:42111624}" May 13 10:03:11.055186 kubelet[2858]: E0513 10:03:11.055136 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:11.335490 kubelet[2858]: I0513 10:03:11.335126 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7597b9c9d9-8lxcx" podStartSLOduration=28.302872454 podStartE2EDuration="32.335108357s" podCreationTimestamp="2025-05-13 10:02:39 +0000 UTC" firstStartedPulling="2025-05-13 10:03:06.512064708 +0000 UTC m=+46.848101954" lastFinishedPulling="2025-05-13 10:03:10.544300611 +0000 UTC m=+50.880337857" observedRunningTime="2025-05-13 10:03:11.047688267 +0000 UTC m=+51.383725513" watchObservedRunningTime="2025-05-13 10:03:11.335108357 +0000 UTC m=+51.671145603" May 13 10:03:11.958219 kubelet[2858]: E0513 10:03:11.958176 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:03:12.430193 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:37310.service - OpenSSH per-connection server daemon (10.0.0.1:37310). May 13 10:03:12.496647 sshd[4858]: Accepted publickey for core from 10.0.0.1 port 37310 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:03:12.498780 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:12.505958 systemd-logind[1575]: New session 12 of user core. May 13 10:03:12.513149 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 13 10:03:12.637412 sshd[4861]: Connection closed by 10.0.0.1 port 37310 May 13 10:03:12.637957 sshd-session[4858]: pam_unix(sshd:session): session closed for user core May 13 10:03:12.653496 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:37310.service: Deactivated successfully. May 13 10:03:12.655855 systemd[1]: session-12.scope: Deactivated successfully. May 13 10:03:12.656796 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. May 13 10:03:12.660447 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:37322.service - OpenSSH per-connection server daemon (10.0.0.1:37322). May 13 10:03:12.661527 systemd-logind[1575]: Removed session 12. May 13 10:03:12.717609 sshd[4877]: Accepted publickey for core from 10.0.0.1 port 37322 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:03:12.719509 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:12.724717 systemd-logind[1575]: New session 13 of user core. May 13 10:03:12.734197 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 10:03:13.071325 sshd[4879]: Connection closed by 10.0.0.1 port 37322 May 13 10:03:13.086451 sshd-session[4877]: pam_unix(sshd:session): session closed for user core May 13 10:03:13.091211 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:37328.service - OpenSSH per-connection server daemon (10.0.0.1:37328). May 13 10:03:13.092037 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:37322.service: Deactivated successfully. May 13 10:03:13.094478 systemd[1]: session-13.scope: Deactivated successfully. May 13 10:03:13.095505 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. May 13 10:03:13.098239 systemd-logind[1575]: Removed session 13. May 13 10:03:13.144816 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 37328 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:03:13.146553 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:13.151729 systemd-logind[1575]: New session 14 of user core. May 13 10:03:13.166143 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 10:03:13.472729 sshd[4899]: Connection closed by 10.0.0.1 port 37328 May 13 10:03:13.473109 sshd-session[4894]: pam_unix(sshd:session): session closed for user core May 13 10:03:13.478156 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:37328.service: Deactivated successfully. May 13 10:03:13.480944 systemd[1]: session-14.scope: Deactivated successfully. May 13 10:03:13.482176 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. May 13 10:03:13.483628 systemd-logind[1575]: Removed session 14. 
May 13 10:03:15.698787 containerd[1591]: time="2025-05-13T10:03:15.698714274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:15.792540 containerd[1591]: time="2025-05-13T10:03:15.792489635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 10:03:15.832413 containerd[1591]: time="2025-05-13T10:03:15.832365012Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:15.870018 containerd[1591]: time="2025-05-13T10:03:15.869979141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:15.870538 containerd[1591]: time="2025-05-13T10:03:15.870490561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 5.325876433s" May 13 10:03:15.870538 containerd[1591]: time="2025-05-13T10:03:15.870532139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 10:03:15.871560 containerd[1591]: time="2025-05-13T10:03:15.871502993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 10:03:15.880853 containerd[1591]: time="2025-05-13T10:03:15.880804631Z" level=info msg="CreateContainer within sandbox \"227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 10:03:16.216165 containerd[1591]: time="2025-05-13T10:03:16.216120934Z" level=info msg="Container 45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:16.643526 containerd[1591]: time="2025-05-13T10:03:16.643401171Z" level=info msg="CreateContainer within sandbox \"227ed828d98b76677103b398c4781d83d2c2b8a035c29e49daacb7e2d1fed293\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5\"" May 13 10:03:16.644140 containerd[1591]: time="2025-05-13T10:03:16.644087551Z" level=info msg="StartContainer for \"45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5\"" May 13 10:03:16.645538 containerd[1591]: time="2025-05-13T10:03:16.645502157Z" level=info msg="connecting to shim 45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5" address="unix:///run/containerd/s/24d40eadfbefafde50f75e5a60f2e74fc1c7b9c2e207d730eb00f7d38577606f" protocol=ttrpc version=3 May 13 10:03:16.672013 systemd[1]: Started cri-containerd-45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5.scope - libcontainer container 45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5. 
May 13 10:03:16.724129 containerd[1591]: time="2025-05-13T10:03:16.724074846Z" level=info msg="StartContainer for \"45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5\" returns successfully" May 13 10:03:17.030729 containerd[1591]: time="2025-05-13T10:03:17.030669707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5\" id:\"236162b7ecfef16dd08dc8dc556dad54993231b4b0f455617c5b6dc01ddc3041\" pid:4966 exited_at:{seconds:1747130597 nanos:30132738}" May 13 10:03:17.053087 kubelet[2858]: I0513 10:03:17.053008 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-667f6586cc-nnvqm" podStartSLOduration=29.837468793 podStartE2EDuration="38.052987469s" podCreationTimestamp="2025-05-13 10:02:39 +0000 UTC" firstStartedPulling="2025-05-13 10:03:07.655752992 +0000 UTC m=+47.991790238" lastFinishedPulling="2025-05-13 10:03:15.871271668 +0000 UTC m=+56.207308914" observedRunningTime="2025-05-13 10:03:17.003075296 +0000 UTC m=+57.339112542" watchObservedRunningTime="2025-05-13 10:03:17.052987469 +0000 UTC m=+57.389024715" May 13 10:03:18.493033 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:35120.service - OpenSSH per-connection server daemon (10.0.0.1:35120). May 13 10:03:18.560545 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 35120 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:03:18.562547 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:18.567382 systemd-logind[1575]: New session 15 of user core. May 13 10:03:18.578014 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 10:03:18.739974 containerd[1591]: time="2025-05-13T10:03:18.739901414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:18.740966 containerd[1591]: time="2025-05-13T10:03:18.740860414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 10:03:18.741127 sshd[4986]: Connection closed by 10.0.0.1 port 35120 May 13 10:03:18.741360 sshd-session[4984]: pam_unix(sshd:session): session closed for user core May 13 10:03:18.745279 containerd[1591]: time="2025-05-13T10:03:18.744949734Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:18.746331 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:35120.service: Deactivated successfully. May 13 10:03:18.748435 systemd[1]: session-15.scope: Deactivated successfully. May 13 10:03:18.749316 containerd[1591]: time="2025-05-13T10:03:18.749270438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:18.749564 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. 
May 13 10:03:18.750143 containerd[1591]: time="2025-05-13T10:03:18.749791026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.878258978s" May 13 10:03:18.750143 containerd[1591]: time="2025-05-13T10:03:18.749829959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 10:03:18.751351 systemd-logind[1575]: Removed session 15. May 13 10:03:18.751576 containerd[1591]: time="2025-05-13T10:03:18.751547364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 10:03:18.753303 containerd[1591]: time="2025-05-13T10:03:18.753267795Z" level=info msg="CreateContainer within sandbox \"69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 10:03:18.781179 containerd[1591]: time="2025-05-13T10:03:18.781115090Z" level=info msg="Container 24eec471727f0fb36055b88927f134c401c1cb5b0d8eeb4841954ea2a39ae9dd: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:18.822157 containerd[1591]: time="2025-05-13T10:03:18.822095159Z" level=info msg="CreateContainer within sandbox \"69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"24eec471727f0fb36055b88927f134c401c1cb5b0d8eeb4841954ea2a39ae9dd\"" May 13 10:03:18.822807 containerd[1591]: time="2025-05-13T10:03:18.822777380Z" level=info msg="StartContainer for \"24eec471727f0fb36055b88927f134c401c1cb5b0d8eeb4841954ea2a39ae9dd\"" May 13 10:03:18.824645 containerd[1591]: time="2025-05-13T10:03:18.824594031Z" level=info msg="connecting to shim 24eec471727f0fb36055b88927f134c401c1cb5b0d8eeb4841954ea2a39ae9dd" address="unix:///run/containerd/s/5e22172e721e17bee064089d6b99f8fec6e2eb7a37f123acf02eddbd4d156573" protocol=ttrpc version=3 May 13 10:03:18.854036 systemd[1]: Started cri-containerd-24eec471727f0fb36055b88927f134c401c1cb5b0d8eeb4841954ea2a39ae9dd.scope - libcontainer container 24eec471727f0fb36055b88927f134c401c1cb5b0d8eeb4841954ea2a39ae9dd. 
May 13 10:03:19.235683 containerd[1591]: time="2025-05-13T10:03:19.235606256Z" level=info msg="StartContainer for \"24eec471727f0fb36055b88927f134c401c1cb5b0d8eeb4841954ea2a39ae9dd\" returns successfully" May 13 10:03:19.292026 containerd[1591]: time="2025-05-13T10:03:19.291937683Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:19.293075 containerd[1591]: time="2025-05-13T10:03:19.293016599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 10:03:19.295042 containerd[1591]: time="2025-05-13T10:03:19.294987119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 543.412254ms" May 13 10:03:19.295042 containerd[1591]: time="2025-05-13T10:03:19.295036312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 10:03:19.296013 containerd[1591]: time="2025-05-13T10:03:19.295978200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 10:03:19.297796 containerd[1591]: time="2025-05-13T10:03:19.297740129Z" level=info msg="CreateContainer within sandbox \"b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 10:03:19.308477 containerd[1591]: time="2025-05-13T10:03:19.308436807Z" level=info msg="Container 87e9065d193a9149e34643b5a376d1658f11ef5b11a635752eb3bbccb002f11e: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:19.318281 containerd[1591]: time="2025-05-13T10:03:19.318226121Z" level=info msg="CreateContainer within sandbox \"b589fb6f3954d9db79a4110215645ccec090f1a258074b82c05dbf5827f7ddf1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"87e9065d193a9149e34643b5a376d1658f11ef5b11a635752eb3bbccb002f11e\"" May 13 10:03:19.318862 containerd[1591]: time="2025-05-13T10:03:19.318808094Z" level=info msg="StartContainer for \"87e9065d193a9149e34643b5a376d1658f11ef5b11a635752eb3bbccb002f11e\"" May 13 10:03:19.320005 containerd[1591]: time="2025-05-13T10:03:19.319939267Z" level=info msg="connecting to shim 87e9065d193a9149e34643b5a376d1658f11ef5b11a635752eb3bbccb002f11e" address="unix:///run/containerd/s/f2e8ba40ed15fc5c6139ec5962c734173adfe812191fc53c161060ab0b502580" protocol=ttrpc version=3 May 13 10:03:19.349084 systemd[1]: Started cri-containerd-87e9065d193a9149e34643b5a376d1658f11ef5b11a635752eb3bbccb002f11e.scope - libcontainer container 87e9065d193a9149e34643b5a376d1658f11ef5b11a635752eb3bbccb002f11e. 
May 13 10:03:19.402792 containerd[1591]: time="2025-05-13T10:03:19.402744390Z" level=info msg="StartContainer for \"87e9065d193a9149e34643b5a376d1658f11ef5b11a635752eb3bbccb002f11e\" returns successfully" May 13 10:03:20.297571 kubelet[2858]: I0513 10:03:20.297488 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7597b9c9d9-dv8bp" podStartSLOduration=31.001072494 podStartE2EDuration="41.297439138s" podCreationTimestamp="2025-05-13 10:02:39 +0000 UTC" firstStartedPulling="2025-05-13 10:03:08.999475241 +0000 UTC m=+49.335512487" lastFinishedPulling="2025-05-13 10:03:19.295841865 +0000 UTC m=+59.631879131" observedRunningTime="2025-05-13 10:03:20.296569735 +0000 UTC m=+60.632606981" watchObservedRunningTime="2025-05-13 10:03:20.297439138 +0000 UTC m=+60.633476384" May 13 10:03:21.774380 containerd[1591]: time="2025-05-13T10:03:21.774303387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:21.775287 containerd[1591]: time="2025-05-13T10:03:21.775242552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 10:03:21.776666 containerd[1591]: time="2025-05-13T10:03:21.776622131Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:21.779275 containerd[1591]: time="2025-05-13T10:03:21.779212981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:21.779848 containerd[1591]: time="2025-05-13T10:03:21.779802501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.483789044s" May 13 10:03:21.779848 containerd[1591]: time="2025-05-13T10:03:21.779841918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 10:03:21.781749 containerd[1591]: time="2025-05-13T10:03:21.781704679Z" level=info msg="CreateContainer within sandbox \"69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 10:03:21.790484 containerd[1591]: time="2025-05-13T10:03:21.790435324Z" level=info msg="Container dfb2fa8f7241445d9bda0b519157acea3dfeded40fe66a171f67102ea44b853c: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:21.800560 containerd[1591]: time="2025-05-13T10:03:21.800519987Z" level=info msg="CreateContainer within sandbox \"69cb218af41d97f5ebd9d8565c45751fe2b0475fdda80de1b5e804eff6c561cb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dfb2fa8f7241445d9bda0b519157acea3dfeded40fe66a171f67102ea44b853c\"" May 13 10:03:21.801285 containerd[1591]: time="2025-05-13T10:03:21.801140817Z" level=info msg="StartContainer for 
\"dfb2fa8f7241445d9bda0b519157acea3dfeded40fe66a171f67102ea44b853c\"" May 13 10:03:21.803087 containerd[1591]: time="2025-05-13T10:03:21.803052093Z" level=info msg="connecting to shim dfb2fa8f7241445d9bda0b519157acea3dfeded40fe66a171f67102ea44b853c" address="unix:///run/containerd/s/5e22172e721e17bee064089d6b99f8fec6e2eb7a37f123acf02eddbd4d156573" protocol=ttrpc version=3 May 13 10:03:21.872062 systemd[1]: Started cri-containerd-dfb2fa8f7241445d9bda0b519157acea3dfeded40fe66a171f67102ea44b853c.scope - libcontainer container dfb2fa8f7241445d9bda0b519157acea3dfeded40fe66a171f67102ea44b853c. May 13 10:03:21.921619 containerd[1591]: time="2025-05-13T10:03:21.921564204Z" level=info msg="StartContainer for \"dfb2fa8f7241445d9bda0b519157acea3dfeded40fe66a171f67102ea44b853c\" returns successfully" May 13 10:03:22.260778 kubelet[2858]: I0513 10:03:22.260681 2858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-thf6f" podStartSLOduration=30.015192627 podStartE2EDuration="43.260663461s" podCreationTimestamp="2025-05-13 10:02:39 +0000 UTC" firstStartedPulling="2025-05-13 10:03:08.534915366 +0000 UTC m=+48.870952602" lastFinishedPulling="2025-05-13 10:03:21.78038619 +0000 UTC m=+62.116423436" observedRunningTime="2025-05-13 10:03:22.260071727 +0000 UTC m=+62.596108983" watchObservedRunningTime="2025-05-13 10:03:22.260663461 +0000 UTC m=+62.596700707" May 13 10:03:22.878225 kubelet[2858]: I0513 10:03:22.878183 2858 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 10:03:22.878225 kubelet[2858]: I0513 10:03:22.878222 2858 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 10:03:23.753973 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:33780.service - OpenSSH per-connection server daemon (10.0.0.1:33780). May 13 10:03:23.811769 sshd[5119]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:03:23.813357 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:23.817750 systemd-logind[1575]: New session 16 of user core. May 13 10:03:23.829011 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 10:03:23.958619 sshd[5121]: Connection closed by 10.0.0.1 port 33780 May 13 10:03:23.959365 sshd-session[5119]: pam_unix(sshd:session): session closed for user core May 13 10:03:23.965207 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:33780.service: Deactivated successfully. May 13 10:03:23.968178 systemd[1]: session-16.scope: Deactivated successfully. May 13 10:03:23.969473 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. May 13 10:03:23.971541 systemd-logind[1575]: Removed session 16. 
May 13 10:03:24.474846 containerd[1591]: time="2025-05-13T10:03:24.474783135Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5\" id:\"e25b0605d45edb7ef8b25fdeacd415e8c297e3bacdb108ae4b98ea0dd45a2ee1\" pid:5146 exited_at:{seconds:1747130604 nanos:474449211}"
May 13 10:03:28.776738 kubelet[2858]: E0513 10:03:28.776659 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:28.978106 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:33790.service - OpenSSH per-connection server daemon (10.0.0.1:33790).
May 13 10:03:29.035262 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 33790 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:29.036895 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:29.041480 systemd-logind[1575]: New session 17 of user core.
May 13 10:03:29.056027 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 10:03:29.165477 sshd[5167]: Connection closed by 10.0.0.1 port 33790
May 13 10:03:29.165846 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
May 13 10:03:29.170334 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:33790.service: Deactivated successfully.
May 13 10:03:29.172429 systemd[1]: session-17.scope: Deactivated successfully.
May 13 10:03:29.173294 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit.
May 13 10:03:29.174602 systemd-logind[1575]: Removed session 17.
May 13 10:03:34.180601 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:45136.service - OpenSSH per-connection server daemon (10.0.0.1:45136).
May 13 10:03:34.251711 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 45136 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:34.253813 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:34.258961 systemd-logind[1575]: New session 18 of user core.
May 13 10:03:34.271006 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 10:03:34.392080 sshd[5182]: Connection closed by 10.0.0.1 port 45136
May 13 10:03:34.392411 sshd-session[5180]: pam_unix(sshd:session): session closed for user core
May 13 10:03:34.402768 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:45136.service: Deactivated successfully.
May 13 10:03:34.404816 systemd[1]: session-18.scope: Deactivated successfully.
May 13 10:03:34.406159 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit.
May 13 10:03:34.409611 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:45138.service - OpenSSH per-connection server daemon (10.0.0.1:45138).
May 13 10:03:34.410856 systemd-logind[1575]: Removed session 18.
May 13 10:03:34.461573 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 45138 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:34.463332 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:34.468774 systemd-logind[1575]: New session 19 of user core.
May 13 10:03:34.481163 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 10:03:34.756331 sshd[5199]: Connection closed by 10.0.0.1 port 45138
May 13 10:03:34.756673 sshd-session[5195]: pam_unix(sshd:session): session closed for user core
May 13 10:03:34.765631 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:45138.service: Deactivated successfully.
May 13 10:03:34.767827 systemd[1]: session-19.scope: Deactivated successfully.
May 13 10:03:34.768610 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit.
May 13 10:03:34.772206 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:45146.service - OpenSSH per-connection server daemon (10.0.0.1:45146).
May 13 10:03:34.773167 systemd-logind[1575]: Removed session 19.
May 13 10:03:34.828632 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 45146 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:34.830798 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:34.838828 systemd-logind[1575]: New session 20 of user core.
May 13 10:03:34.845091 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 10:03:36.217979 sshd[5212]: Connection closed by 10.0.0.1 port 45146
May 13 10:03:36.219188 sshd-session[5210]: pam_unix(sshd:session): session closed for user core
May 13 10:03:36.231644 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:45146.service: Deactivated successfully.
May 13 10:03:36.235753 systemd[1]: session-20.scope: Deactivated successfully.
May 13 10:03:36.236132 systemd[1]: session-20.scope: Consumed 590ms CPU time, 68.8M memory peak.
May 13 10:03:36.238116 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit.
May 13 10:03:36.241613 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:45162.service - OpenSSH per-connection server daemon (10.0.0.1:45162).
May 13 10:03:36.244446 systemd-logind[1575]: Removed session 20.
May 13 10:03:36.295857 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 45162 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:36.297827 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:36.303503 systemd-logind[1575]: New session 21 of user core.
May 13 10:03:36.311049 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 10:03:36.542611 sshd[5236]: Connection closed by 10.0.0.1 port 45162
May 13 10:03:36.543052 sshd-session[5234]: pam_unix(sshd:session): session closed for user core
May 13 10:03:36.555011 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:45162.service: Deactivated successfully.
May 13 10:03:36.557585 systemd[1]: session-21.scope: Deactivated successfully.
May 13 10:03:36.558663 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit.
May 13 10:03:36.563377 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:45178.service - OpenSSH per-connection server daemon (10.0.0.1:45178).
May 13 10:03:36.564247 systemd-logind[1575]: Removed session 21.
May 13 10:03:36.620831 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 45178 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:36.622715 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:36.627641 systemd-logind[1575]: New session 22 of user core.
May 13 10:03:36.634028 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 10:03:36.744913 sshd[5250]: Connection closed by 10.0.0.1 port 45178
May 13 10:03:36.745266 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
May 13 10:03:36.750394 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:45178.service: Deactivated successfully.
May 13 10:03:36.752523 systemd[1]: session-22.scope: Deactivated successfully.
May 13 10:03:36.753327 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit.
May 13 10:03:36.754590 systemd-logind[1575]: Removed session 22.
May 13 10:03:37.776354 kubelet[2858]: E0513 10:03:37.776306 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:41.047074 containerd[1591]: time="2025-05-13T10:03:41.047015745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d3a557b808d3466450717aefb39f02f26283b07d5730fcaac473d47b5d9717e\" id:\"8aba2144a753c59da9b09cc620df44565630d58cda55f55ba82285f9e6b0f9ad\" pid:5275 exited_at:{seconds:1747130621 nanos:46625719}"
May 13 10:03:41.762047 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:45182.service - OpenSSH per-connection server daemon (10.0.0.1:45182).
May 13 10:03:41.838455 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 45182 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:41.840812 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:41.847661 systemd-logind[1575]: New session 23 of user core.
May 13 10:03:41.857192 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 10:03:42.000823 sshd[5291]: Connection closed by 10.0.0.1 port 45182
May 13 10:03:42.001249 sshd-session[5289]: pam_unix(sshd:session): session closed for user core
May 13 10:03:42.006576 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:45182.service: Deactivated successfully.
May 13 10:03:42.009365 systemd[1]: session-23.scope: Deactivated successfully.
May 13 10:03:42.010398 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit.
May 13 10:03:42.012072 systemd-logind[1575]: Removed session 23.
May 13 10:03:47.021981 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:53864.service - OpenSSH per-connection server daemon (10.0.0.1:53864).
May 13 10:03:47.075337 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 53864 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:47.077151 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:47.081769 systemd-logind[1575]: New session 24 of user core.
May 13 10:03:47.089032 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 10:03:47.203127 sshd[5309]: Connection closed by 10.0.0.1 port 53864
May 13 10:03:47.203445 sshd-session[5307]: pam_unix(sshd:session): session closed for user core
May 13 10:03:47.207673 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:53864.service: Deactivated successfully.
May 13 10:03:47.209928 systemd[1]: session-24.scope: Deactivated successfully.
May 13 10:03:47.210785 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit.
May 13 10:03:47.212369 systemd-logind[1575]: Removed session 24.
May 13 10:03:48.776359 kubelet[2858]: E0513 10:03:48.776297 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:52.226246 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:53872.service - OpenSSH per-connection server daemon (10.0.0.1:53872).
May 13 10:03:52.295518 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 53872 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:52.297572 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:52.302901 systemd-logind[1575]: New session 25 of user core.
May 13 10:03:52.314102 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 10:03:52.436008 sshd[5332]: Connection closed by 10.0.0.1 port 53872
May 13 10:03:52.436388 sshd-session[5330]: pam_unix(sshd:session): session closed for user core
May 13 10:03:52.441499 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:53872.service: Deactivated successfully.
May 13 10:03:52.444235 systemd[1]: session-25.scope: Deactivated successfully.
May 13 10:03:52.445208 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit.
May 13 10:03:52.446839 systemd-logind[1575]: Removed session 25.
May 13 10:03:54.472372 containerd[1591]: time="2025-05-13T10:03:54.472294593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45527d4acd240b0b441560242243483d6ca744540cc84cc7c492e8357f9678b5\" id:\"f94311913eba7472c6dab3b4647b944806a50c67bd052935aa5cb8b4e2d898e5\" pid:5355 exited_at:{seconds:1747130634 nanos:471740690}"
May 13 10:03:54.776925 kubelet[2858]: E0513 10:03:54.776743 2858 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:57.452781 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:34586.service - OpenSSH per-connection server daemon (10.0.0.1:34586).
May 13 10:03:57.555795 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 34586 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:57.557818 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:57.563124 systemd-logind[1575]: New session 26 of user core.
May 13 10:03:57.573044 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 10:03:57.688634 sshd[5368]: Connection closed by 10.0.0.1 port 34586
May 13 10:03:57.688988 sshd-session[5366]: pam_unix(sshd:session): session closed for user core
May 13 10:03:57.693743 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:34586.service: Deactivated successfully.
May 13 10:03:57.696135 systemd[1]: session-26.scope: Deactivated successfully.
May 13 10:03:57.697006 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit.
May 13 10:03:57.698453 systemd-logind[1575]: Removed session 26.