May 14 23:40:29.955880 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:09:34 -00 2025
May 14 23:40:29.955915 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a
May 14 23:40:29.955931 kernel: BIOS-provided physical RAM map:
May 14 23:40:29.955939 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 23:40:29.955948 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 23:40:29.955956 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 23:40:29.955966 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 14 23:40:29.955975 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 14 23:40:29.955984 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 14 23:40:29.955993 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 14 23:40:29.956005 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 23:40:29.956014 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 23:40:29.956029 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 23:40:29.956051 kernel: NX (Execute Disable) protection: active
May 14 23:40:29.956062 kernel: APIC: Static calls initialized
May 14 23:40:29.956080 kernel: SMBIOS 2.8 present.
May 14 23:40:29.956090 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 14 23:40:29.956099 kernel: Hypervisor detected: KVM
May 14 23:40:29.956109 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 23:40:29.956119 kernel: kvm-clock: using sched offset of 3488699713 cycles
May 14 23:40:29.956129 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 23:40:29.956139 kernel: tsc: Detected 2794.746 MHz processor
May 14 23:40:29.956149 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 23:40:29.956159 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 23:40:29.956169 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 14 23:40:29.956183 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 23:40:29.956193 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 23:40:29.956202 kernel: Using GB pages for direct mapping
May 14 23:40:29.956212 kernel: ACPI: Early table checksum verification disabled
May 14 23:40:29.956222 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 14 23:40:29.956231 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:40:29.956241 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:40:29.956250 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:40:29.956260 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 14 23:40:29.956272 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:40:29.956282 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:40:29.956291 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:40:29.956301 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:40:29.956310 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 14 23:40:29.956320 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 14 23:40:29.956336 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 14 23:40:29.956348 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 14 23:40:29.956358 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 14 23:40:29.956369 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 14 23:40:29.956379 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 14 23:40:29.956389 kernel: No NUMA configuration found
May 14 23:40:29.956399 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 14 23:40:29.956410 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 14 23:40:29.956424 kernel: Zone ranges:
May 14 23:40:29.956434 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 23:40:29.956444 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 14 23:40:29.956454 kernel: Normal empty
May 14 23:40:29.956465 kernel: Movable zone start for each node
May 14 23:40:29.956475 kernel: Early memory node ranges
May 14 23:40:29.956504 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 23:40:29.956514 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 14 23:40:29.956524 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 14 23:40:29.956535 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 23:40:29.956553 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 23:40:29.956564 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 14 23:40:29.956574 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 23:40:29.956585 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 23:40:29.956595 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 23:40:29.956606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 23:40:29.956616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 23:40:29.956627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 23:40:29.956637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 23:40:29.956651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 23:40:29.956661 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 23:40:29.956672 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 23:40:29.956683 kernel: TSC deadline timer available
May 14 23:40:29.956693 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 14 23:40:29.956703 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 23:40:29.956714 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 23:40:29.956728 kernel: kvm-guest: setup PV sched yield
May 14 23:40:29.956739 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 14 23:40:29.956753 kernel: Booting paravirtualized kernel on KVM
May 14 23:40:29.956764 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 23:40:29.956775 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 23:40:29.956785 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 14 23:40:29.956796 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 14 23:40:29.956806 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 23:40:29.956817 kernel: kvm-guest: PV spinlocks enabled
May 14 23:40:29.956827 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 23:40:29.956839 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a
May 14 23:40:29.956854 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:40:29.956864 kernel: random: crng init done
May 14 23:40:29.956874 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:40:29.956885 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:40:29.956896 kernel: Fallback order for Node 0: 0
May 14 23:40:29.956906 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 14 23:40:29.956917 kernel: Policy zone: DMA32
May 14 23:40:29.956928 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:40:29.956942 kernel: Memory: 2430492K/2571752K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 141000K reserved, 0K cma-reserved)
May 14 23:40:29.956953 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 23:40:29.956963 kernel: ftrace: allocating 37993 entries in 149 pages
May 14 23:40:29.956974 kernel: ftrace: allocated 149 pages with 4 groups
May 14 23:40:29.956984 kernel: Dynamic Preempt: voluntary
May 14 23:40:29.956995 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:40:29.957007 kernel: rcu: RCU event tracing is enabled.
May 14 23:40:29.957018 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 23:40:29.957029 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:40:29.957055 kernel: Rude variant of Tasks RCU enabled.
May 14 23:40:29.957066 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:40:29.957077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:40:29.957091 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 23:40:29.957101 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 23:40:29.957112 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:40:29.957123 kernel: Console: colour VGA+ 80x25
May 14 23:40:29.957134 kernel: printk: console [ttyS0] enabled
May 14 23:40:29.957144 kernel: ACPI: Core revision 20230628
May 14 23:40:29.957155 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 23:40:29.957169 kernel: APIC: Switch to symmetric I/O mode setup
May 14 23:40:29.957179 kernel: x2apic enabled
May 14 23:40:29.957189 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 23:40:29.957199 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 23:40:29.957210 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 23:40:29.957220 kernel: kvm-guest: setup PV IPIs
May 14 23:40:29.957244 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 23:40:29.957254 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 14 23:40:29.957265 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 14 23:40:29.957275 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 23:40:29.957286 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 23:40:29.957300 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 23:40:29.957311 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 23:40:29.957321 kernel: Spectre V2 : Mitigation: Retpolines
May 14 23:40:29.957332 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 14 23:40:29.957343 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 23:40:29.957356 kernel: RETBleed: Mitigation: untrained return thunk
May 14 23:40:29.957371 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 23:40:29.957382 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 23:40:29.957392 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 23:40:29.957404 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 23:40:29.957415 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 23:40:29.957426 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 23:40:29.957436 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 23:40:29.957451 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 23:40:29.957461 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 23:40:29.957472 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 23:40:29.957499 kernel: Freeing SMP alternatives memory: 32K
May 14 23:40:29.957510 kernel: pid_max: default: 32768 minimum: 301
May 14 23:40:29.957521 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:40:29.957532 kernel: landlock: Up and running.
May 14 23:40:29.957543 kernel: SELinux: Initializing.
May 14 23:40:29.957553 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:40:29.957569 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:40:29.957579 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 23:40:29.957590 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:40:29.957601 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:40:29.957612 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:40:29.957623 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 23:40:29.957633 kernel: ... version: 0
May 14 23:40:29.957648 kernel: ... bit width: 48
May 14 23:40:29.957659 kernel: ... generic registers: 6
May 14 23:40:29.957674 kernel: ... value mask: 0000ffffffffffff
May 14 23:40:29.957685 kernel: ... max period: 00007fffffffffff
May 14 23:40:29.957696 kernel: ... fixed-purpose events: 0
May 14 23:40:29.957706 kernel: ... event mask: 000000000000003f
May 14 23:40:29.957717 kernel: signal: max sigframe size: 1776
May 14 23:40:29.957728 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:40:29.957739 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:40:29.957750 kernel: smp: Bringing up secondary CPUs ...
May 14 23:40:29.957761 kernel: smpboot: x86: Booting SMP configuration:
May 14 23:40:29.957776 kernel: .... node #0, CPUs: #1 #2 #3
May 14 23:40:29.957787 kernel: smp: Brought up 1 node, 4 CPUs
May 14 23:40:29.957797 kernel: smpboot: Max logical packages: 1
May 14 23:40:29.957808 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 14 23:40:29.957818 kernel: devtmpfs: initialized
May 14 23:40:29.957829 kernel: x86/mm: Memory block size: 128MB
May 14 23:40:29.957841 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:40:29.957851 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 23:40:29.957861 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:40:29.957876 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:40:29.957887 kernel: audit: initializing netlink subsys (disabled)
May 14 23:40:29.957898 kernel: audit: type=2000 audit(1747266028.607:1): state=initialized audit_enabled=0 res=1
May 14 23:40:29.957908 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:40:29.957919 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 23:40:29.957930 kernel: cpuidle: using governor menu
May 14 23:40:29.957942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:40:29.957953 kernel: dca service started, version 1.12.1
May 14 23:40:29.957964 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 14 23:40:29.957979 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 14 23:40:29.957990 kernel: PCI: Using configuration type 1 for base access
May 14 23:40:29.958001 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 23:40:29.958012 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:40:29.958023 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:40:29.958045 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:40:29.958057 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:40:29.958068 kernel: ACPI: Added _OSI(Module Device)
May 14 23:40:29.958079 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:40:29.958095 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:40:29.958106 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:40:29.958118 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:40:29.958129 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 14 23:40:29.958140 kernel: ACPI: Interpreter enabled
May 14 23:40:29.958151 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 23:40:29.958163 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 23:40:29.958175 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 23:40:29.958186 kernel: PCI: Using E820 reservations for host bridge windows
May 14 23:40:29.958201 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 23:40:29.958213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:40:29.958594 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:40:29.958786 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 23:40:29.958963 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 23:40:29.958980 kernel: PCI host bridge to bus 0000:00
May 14 23:40:29.959187 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 23:40:29.959346 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 23:40:29.959515 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 23:40:29.959661 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 14 23:40:29.959805 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 14 23:40:29.959955 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 14 23:40:29.960122 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:40:29.960310 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 14 23:40:29.960475 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 14 23:40:29.960645 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 14 23:40:29.960777 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 14 23:40:29.960905 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 14 23:40:29.961043 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 23:40:29.961196 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 14 23:40:29.961333 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 14 23:40:29.961462 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 14 23:40:29.961665 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 14 23:40:29.961855 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 14 23:40:29.962014 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 14 23:40:29.962182 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 14 23:40:29.962341 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 14 23:40:29.962546 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 14 23:40:29.962709 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 14 23:40:29.962868 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 14 23:40:29.963028 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 14 23:40:29.963201 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 14 23:40:29.963384 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 14 23:40:29.963561 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 23:40:29.963724 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 11718 usecs
May 14 23:40:29.963871 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 14 23:40:29.964026 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 14 23:40:29.964198 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 14 23:40:29.964463 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 14 23:40:29.964728 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 14 23:40:29.964742 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 23:40:29.964756 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 23:40:29.964765 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 23:40:29.964773 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 23:40:29.964781 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 23:40:29.964789 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 23:40:29.964797 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 23:40:29.964805 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 23:40:29.964813 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 23:40:29.964821 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 23:40:29.964832 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 23:40:29.964840 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 23:40:29.964848 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 23:40:29.964857 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 23:40:29.964865 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 23:40:29.964873 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 23:40:29.964881 kernel: iommu: Default domain type: Translated
May 14 23:40:29.964889 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 23:40:29.964897 kernel: PCI: Using ACPI for IRQ routing
May 14 23:40:29.964908 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 23:40:29.964916 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 23:40:29.964924 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 14 23:40:29.965068 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 23:40:29.965199 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 23:40:29.965328 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 23:40:29.965339 kernel: vgaarb: loaded
May 14 23:40:29.965347 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 23:40:29.965359 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 23:40:29.965367 kernel: clocksource: Switched to clocksource kvm-clock
May 14 23:40:29.965376 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:40:29.965384 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:40:29.965392 kernel: pnp: PnP ACPI init
May 14 23:40:29.965584 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 14 23:40:29.965599 kernel: pnp: PnP ACPI: found 6 devices
May 14 23:40:29.965608 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 23:40:29.965620 kernel: NET: Registered PF_INET protocol family
May 14 23:40:29.965629 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:40:29.965637 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:40:29.965645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:40:29.965654 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:40:29.965662 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:40:29.965671 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:40:29.965679 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:40:29.965687 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:40:29.965698 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:40:29.965706 kernel: NET: Registered PF_XDP protocol family
May 14 23:40:29.965833 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 23:40:29.965953 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 23:40:29.966106 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 23:40:29.966250 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 14 23:40:29.966386 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 14 23:40:29.966544 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 14 23:40:29.966563 kernel: PCI: CLS 0 bytes, default 64
May 14 23:40:29.966571 kernel: Initialise system trusted keyrings
May 14 23:40:29.966579 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:40:29.966588 kernel: Key type asymmetric registered
May 14 23:40:29.966597 kernel: Asymmetric key parser 'x509' registered
May 14 23:40:29.966605 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 23:40:29.966613 kernel: io scheduler mq-deadline registered
May 14 23:40:29.966621 kernel: io scheduler kyber registered
May 14 23:40:29.966629 kernel: io scheduler bfq registered
May 14 23:40:29.966637 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 23:40:29.966649 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 23:40:29.966658 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 23:40:29.966666 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 23:40:29.966674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:40:29.966682 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 23:40:29.966691 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 23:40:29.966699 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 23:40:29.966707 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 23:40:29.966873 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 23:40:29.967024 kernel: rtc_cmos 00:04: registered as rtc0
May 14 23:40:29.967048 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 23:40:29.967179 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T23:40:29 UTC (1747266029)
May 14 23:40:29.967305 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 14 23:40:29.967316 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 23:40:29.967324 kernel: NET: Registered PF_INET6 protocol family
May 14 23:40:29.967333 kernel: Segment Routing with IPv6
May 14 23:40:29.967341 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:40:29.967353 kernel: NET: Registered PF_PACKET protocol family
May 14 23:40:29.967362 kernel: Key type dns_resolver registered
May 14 23:40:29.967370 kernel: IPI shorthand broadcast: enabled
May 14 23:40:29.967378 kernel: sched_clock: Marking stable (778002707, 109959119)->(914285601, -26323775)
May 14 23:40:29.967386 kernel: registered taskstats version 1
May 14 23:40:29.967394 kernel: Loading compiled-in X.509 certificates
May 14 23:40:29.967403 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 4f9bc5b8797c7efeb1fcd74892dea83a6cb9d390'
May 14 23:40:29.967411 kernel: Key type .fscrypt registered
May 14 23:40:29.967419 kernel: Key type fscrypt-provisioning registered
May 14 23:40:29.967429 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:40:29.967438 kernel: ima: Allocated hash algorithm: sha1
May 14 23:40:29.967446 kernel: ima: No architecture policies found
May 14 23:40:29.967454 kernel: clk: Disabling unused clocks
May 14 23:40:29.967462 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 14 23:40:29.967470 kernel: Write protecting the kernel read-only data: 40960k
May 14 23:40:29.967478 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 14 23:40:29.967504 kernel: Run /init as init process
May 14 23:40:29.967519 kernel: with arguments:
May 14 23:40:29.967529 kernel: /init
May 14 23:40:29.967539 kernel: with environment:
May 14 23:40:29.967549 kernel: HOME=/
May 14 23:40:29.967558 kernel: TERM=linux
May 14 23:40:29.967566 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:40:29.967575 systemd[1]: Successfully made /usr/ read-only.
May 14 23:40:29.967587 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:40:29.967599 systemd[1]: Detected virtualization kvm.
May 14 23:40:29.967608 systemd[1]: Detected architecture x86-64.
May 14 23:40:29.967616 systemd[1]: Running in initrd.
May 14 23:40:29.967624 systemd[1]: No hostname configured, using default hostname.
May 14 23:40:29.967633 systemd[1]: Hostname set to .
May 14 23:40:29.967642 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:40:29.967650 systemd[1]: Queued start job for default target initrd.target.
May 14 23:40:29.967659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:40:29.967670 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:40:29.967692 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:40:29.967703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:40:29.967712 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:40:29.967722 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:40:29.967734 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:40:29.967744 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:40:29.967753 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:40:29.967762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:40:29.967770 systemd[1]: Reached target paths.target - Path Units.
May 14 23:40:29.967779 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:40:29.967788 systemd[1]: Reached target swap.target - Swaps.
May 14 23:40:29.967797 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:40:29.967808 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:40:29.967817 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:40:29.967826 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:40:29.967835 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:40:29.967844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:40:29.967853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:40:29.967862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:40:29.967871 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:40:29.967879 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 23:40:29.967891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:40:29.967900 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 23:40:29.967909 systemd[1]: Starting systemd-fsck-usr.service... May 14 23:40:29.967918 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:40:29.967926 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:40:29.967935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:40:29.967944 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 23:40:29.967953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:40:29.967965 systemd[1]: Finished systemd-fsck-usr.service. May 14 23:40:29.967974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:40:29.968022 systemd-journald[192]: Collecting audit messages is disabled. May 14 23:40:29.968053 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:40:29.968063 systemd-journald[192]: Journal started May 14 23:40:29.968087 systemd-journald[192]: Runtime Journal (/run/log/journal/c8e0dc1d321f4081b8fe77abf4b50931) is 6M, max 48.3M, 42.3M free. May 14 23:40:29.943795 systemd-modules-load[193]: Inserted module 'overlay' May 14 23:40:29.981995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. May 14 23:40:29.982025 kernel: Bridge firewalling registered May 14 23:40:29.972299 systemd-modules-load[193]: Inserted module 'br_netfilter' May 14 23:40:29.984361 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:40:29.984996 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:40:29.988306 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:40:29.991179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:40:29.992648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:40:29.996289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:40:30.008893 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:40:30.012534 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:40:30.014595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:40:30.024729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:40:30.026933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:40:30.044719 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:40:30.047283 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 14 23:40:30.070966 dracut-cmdline[230]: dracut-dracut-053 May 14 23:40:30.074327 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a May 14 23:40:30.082277 systemd-resolved[219]: Positive Trust Anchors: May 14 23:40:30.082292 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:40:30.082323 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:40:30.084930 systemd-resolved[219]: Defaulting to hostname 'linux'. May 14 23:40:30.086132 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:40:30.092136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:40:30.158515 kernel: SCSI subsystem initialized May 14 23:40:30.167509 kernel: Loading iSCSI transport class v2.0-870. May 14 23:40:30.177505 kernel: iscsi: registered transport (tcp) May 14 23:40:30.198512 kernel: iscsi: registered transport (qla4xxx) May 14 23:40:30.198542 kernel: QLogic iSCSI HBA Driver May 14 23:40:30.254442 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
May 14 23:40:30.258474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 23:40:30.301455 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 23:40:30.301578 kernel: device-mapper: uevent: version 1.0.3 May 14 23:40:30.301595 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 23:40:30.351543 kernel: raid6: avx2x4 gen() 22659 MB/s May 14 23:40:30.368514 kernel: raid6: avx2x2 gen() 30807 MB/s May 14 23:40:30.390991 kernel: raid6: avx2x1 gen() 25608 MB/s May 14 23:40:30.391037 kernel: raid6: using algorithm avx2x2 gen() 30807 MB/s May 14 23:40:30.408719 kernel: raid6: .... xor() 19666 MB/s, rmw enabled May 14 23:40:30.408766 kernel: raid6: using avx2x2 recovery algorithm May 14 23:40:30.430543 kernel: xor: automatically using best checksumming function avx May 14 23:40:30.589540 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 23:40:30.604910 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 23:40:30.612104 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:40:30.640353 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 14 23:40:30.646224 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:40:30.652649 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:40:30.679460 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation May 14 23:40:30.722144 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:40:30.726560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:40:30.819578 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:40:30.823860 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 14 23:40:30.851387 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 23:40:30.854669 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:40:30.857378 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:40:30.859838 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:40:30.863185 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 14 23:40:30.868091 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 23:40:30.873410 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 23:40:30.873643 kernel: cryptd: max_cpu_qlen set to 1000 May 14 23:40:30.883517 kernel: libata version 3.00 loaded. May 14 23:40:30.899593 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:40:30.907195 kernel: AVX2 version of gcm_enc/dec engaged. May 14 23:40:30.907219 kernel: AES CTR mode by8 optimization enabled May 14 23:40:30.907230 kernel: ahci 0000:00:1f.2: version 3.0 May 14 23:40:30.907429 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 14 23:40:30.907442 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 23:40:30.907454 kernel: GPT:9289727 != 19775487 May 14 23:40:30.899677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:40:30.915757 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 23:40:30.915774 kernel: GPT:9289727 != 19775487 May 14 23:40:30.915784 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 14 23:40:30.915801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:40:30.915812 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 14 23:40:30.915986 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 14 23:40:30.916152 kernel: scsi host0: ahci May 14 23:40:30.916345 kernel: scsi host1: ahci May 14 23:40:30.916520 kernel: scsi host2: ahci May 14 23:40:30.901722 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:40:30.919076 kernel: scsi host3: ahci May 14 23:40:30.919295 kernel: scsi host4: ahci May 14 23:40:30.901969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:40:30.931351 kernel: scsi host5: ahci May 14 23:40:30.931678 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 May 14 23:40:30.931692 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 May 14 23:40:30.931703 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 May 14 23:40:30.931713 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 May 14 23:40:30.931725 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 May 14 23:40:30.931735 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 May 14 23:40:30.902804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:40:30.904592 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:40:30.919467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:40:30.938673 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 23:40:30.939028 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
May 14 23:40:30.947623 kernel: BTRFS: device fsid 267fa270-7a71-43aa-9209-0280512688b5 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (458) May 14 23:40:30.952561 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460) May 14 23:40:30.966105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 23:40:31.001498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:40:31.024214 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 23:40:31.037274 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 23:40:31.038532 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 23:40:31.052146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 23:40:31.063651 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 23:40:31.067034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:40:31.102018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 14 23:40:31.228648 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 14 23:40:31.228733 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 14 23:40:31.228745 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 14 23:40:31.230504 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 14 23:40:31.230533 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 14 23:40:31.231508 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 14 23:40:31.232760 kernel: ata3.00: applying bridge limits May 14 23:40:31.232773 kernel: ata3.00: configured for UDMA/100 May 14 23:40:31.233511 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 23:40:31.240526 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 14 23:40:31.249200 disk-uuid[554]: Primary Header is updated. May 14 23:40:31.249200 disk-uuid[554]: Secondary Entries is updated. May 14 23:40:31.249200 disk-uuid[554]: Secondary Header is updated. May 14 23:40:31.256518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:40:31.264511 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:40:31.317280 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 14 23:40:31.317609 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 23:40:31.333506 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 14 23:40:32.265513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:40:32.265578 disk-uuid[563]: The operation has completed successfully. May 14 23:40:32.304571 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 23:40:32.304703 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 23:40:32.334538 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 14 23:40:32.353575 sh[592]: Success May 14 23:40:32.376528 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 14 23:40:32.539611 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 23:40:32.544364 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 23:40:32.564012 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 23:40:32.576644 kernel: BTRFS info (device dm-0): first mount of filesystem 267fa270-7a71-43aa-9209-0280512688b5 May 14 23:40:32.576686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 23:40:32.576705 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 23:40:32.577816 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 23:40:32.578648 kernel: BTRFS info (device dm-0): using free space tree May 14 23:40:32.584226 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 23:40:32.585378 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 23:40:32.586315 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 23:40:32.589660 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 14 23:40:32.621171 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:40:32.621239 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:40:32.621257 kernel: BTRFS info (device vda6): using free space tree May 14 23:40:32.625541 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:40:32.630502 kernel: BTRFS info (device vda6): last unmount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:40:32.716069 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:40:32.719016 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:40:32.761912 systemd-networkd[768]: lo: Link UP May 14 23:40:32.761922 systemd-networkd[768]: lo: Gained carrier May 14 23:40:32.763692 systemd-networkd[768]: Enumeration completed May 14 23:40:32.763787 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:40:32.764044 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:40:32.764048 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:40:32.774388 systemd-networkd[768]: eth0: Link UP May 14 23:40:32.774392 systemd-networkd[768]: eth0: Gained carrier May 14 23:40:32.774398 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:40:32.776038 systemd[1]: Reached target network.target - Network. May 14 23:40:32.791535 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:40:32.954203 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 23:40:32.957537 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 14 23:40:33.146633 ignition[773]: Ignition 2.20.0 May 14 23:40:33.146644 ignition[773]: Stage: fetch-offline May 14 23:40:33.146709 ignition[773]: no configs at "/usr/lib/ignition/base.d" May 14 23:40:33.146721 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:40:33.146833 ignition[773]: parsed url from cmdline: "" May 14 23:40:33.146837 ignition[773]: no config URL provided May 14 23:40:33.146843 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:40:33.146852 ignition[773]: no config at "/usr/lib/ignition/user.ign" May 14 23:40:33.146895 ignition[773]: op(1): [started] loading QEMU firmware config module May 14 23:40:33.146903 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 23:40:33.193984 ignition[773]: op(1): [finished] loading QEMU firmware config module May 14 23:40:33.233752 ignition[773]: parsing config with SHA512: c8fdc089b118b997b8801a5cfe7f4367c16fe005e8d2c605bace809dfe4dff60c94e9c6dd4a49f44bcbca3c13bdbe77826701106fb07c5cb009a6c148b057b24 May 14 23:40:33.244977 unknown[773]: fetched base config from "system" May 14 23:40:33.244995 unknown[773]: fetched user config from "qemu" May 14 23:40:33.247015 ignition[773]: fetch-offline: fetch-offline passed May 14 23:40:33.247929 ignition[773]: Ignition finished successfully May 14 23:40:33.251226 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:40:33.252729 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 23:40:33.253831 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 14 23:40:33.316122 ignition[783]: Ignition 2.20.0 May 14 23:40:33.316136 ignition[783]: Stage: kargs May 14 23:40:33.316309 ignition[783]: no configs at "/usr/lib/ignition/base.d" May 14 23:40:33.316326 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:40:33.317153 ignition[783]: kargs: kargs passed May 14 23:40:33.320678 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 23:40:33.317208 ignition[783]: Ignition finished successfully May 14 23:40:33.322841 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 23:40:33.348804 ignition[791]: Ignition 2.20.0 May 14 23:40:33.348814 ignition[791]: Stage: disks May 14 23:40:33.351887 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 23:40:33.348967 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 14 23:40:33.366541 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 23:40:33.348984 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:40:33.368025 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 23:40:33.349809 ignition[791]: disks: disks passed May 14 23:40:33.370172 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:40:33.349852 ignition[791]: Ignition finished successfully May 14 23:40:33.371197 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:40:33.373054 systemd[1]: Reached target basic.target - Basic System. May 14 23:40:33.374864 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 23:40:33.415807 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 14 23:40:33.670960 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 23:40:33.675053 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 14 23:40:33.779508 kernel: EXT4-fs (vda9): mounted filesystem 81735587-bac5-4d9e-ae49-5642e655af7f r/w with ordered data mode. Quota mode: none. May 14 23:40:33.780410 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 23:40:33.781568 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 23:40:33.784516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:40:33.786228 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 23:40:33.788800 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 23:40:33.788860 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 23:40:33.788895 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:40:33.805141 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 23:40:33.808066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 23:40:33.816105 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809) May 14 23:40:33.816180 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:40:33.816193 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:40:33.817023 kernel: BTRFS info (device vda6): using free space tree May 14 23:40:33.820518 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:40:33.834676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 23:40:33.870683 systemd-networkd[768]: eth0: Gained IPv6LL May 14 23:40:33.998885 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory May 14 23:40:34.034407 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory May 14 23:40:34.040040 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory May 14 23:40:34.082631 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory May 14 23:40:34.188088 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 23:40:34.190558 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 23:40:34.193790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 23:40:34.252673 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 23:40:34.254300 kernel: BTRFS info (device vda6): last unmount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:40:34.265917 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 23:40:34.342367 ignition[926]: INFO : Ignition 2.20.0 May 14 23:40:34.342367 ignition[926]: INFO : Stage: mount May 14 23:40:34.344500 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:40:34.344500 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:40:34.348033 ignition[926]: INFO : mount: mount passed May 14 23:40:34.348978 ignition[926]: INFO : Ignition finished successfully May 14 23:40:34.352264 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 23:40:34.353826 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 23:40:34.783220 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 14 23:40:34.816213 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (936) May 14 23:40:34.816261 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:40:34.816272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:40:34.818012 kernel: BTRFS info (device vda6): using free space tree May 14 23:40:34.820512 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:40:34.822853 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 23:40:34.862902 ignition[953]: INFO : Ignition 2.20.0 May 14 23:40:34.862902 ignition[953]: INFO : Stage: files May 14 23:40:34.864890 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:40:34.864890 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:40:34.864890 ignition[953]: DEBUG : files: compiled without relabeling support, skipping May 14 23:40:34.868790 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 23:40:34.868790 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 23:40:34.871783 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 23:40:34.873447 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 23:40:34.875423 unknown[953]: wrote ssh authorized keys file for user: core May 14 23:40:34.876691 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 23:40:34.878351 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 23:40:34.880576 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 14 23:40:34.965167 
ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 23:40:35.413345 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 23:40:35.413345 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 23:40:35.417726 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 14 23:40:35.795266 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 23:40:36.247016 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 23:40:36.247016 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 23:40:36.251114 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:40:36.251114 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:40:36.251114 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 23:40:36.251114 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 23:40:36.251114 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 23:40:36.251114 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 
23:40:36.251114 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 14 23:40:36.251114 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 14 23:40:36.268471 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 23:40:36.272904 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 23:40:36.274479 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 14 23:40:36.274479 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 14 23:40:36.274479 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 14 23:40:36.274479 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 23:40:36.274479 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 23:40:36.274479 ignition[953]: INFO : files: files passed May 14 23:40:36.274479 ignition[953]: INFO : Ignition finished successfully May 14 23:40:36.275920 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 23:40:36.278175 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 23:40:36.280415 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 23:40:36.293156 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 23:40:36.293266 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 14 23:40:36.296597 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 23:40:36.298006 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:40:36.298006 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:40:36.302304 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:40:36.300818 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:40:36.302558 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:40:36.305651 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:40:36.362770 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:40:36.362962 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:40:36.365715 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:40:36.367995 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:40:36.370400 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:40:36.371859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:40:36.404417 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:40:36.407402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:40:36.429284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:40:36.432085 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:40:36.434633 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:40:36.436536 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:40:36.437640 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:40:36.440780 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:40:36.443034 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:40:36.444938 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:40:36.447285 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:40:36.450092 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:40:36.452613 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:40:36.455183 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:40:36.458290 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:40:36.460826 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:40:36.462992 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:40:36.464782 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:40:36.466018 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:40:36.468500 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:40:36.471011 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:40:36.473662 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:40:36.474838 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:40:36.477631 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:40:36.478951 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:40:36.481721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 23:40:36.483131 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:40:36.486108 systemd[1]: Stopped target paths.target - Path Units.
May 14 23:40:36.488345 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 23:40:36.489736 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:40:36.493171 systemd[1]: Stopped target slices.target - Slice Units.
May 14 23:40:36.495412 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 23:40:36.497590 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 23:40:36.498520 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:40:36.500558 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 23:40:36.501527 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:40:36.503948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:40:36.505389 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:40:36.508382 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:40:36.509495 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:40:36.512632 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:40:36.515954 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:40:36.518135 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:40:36.519351 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:40:36.521798 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:40:36.523006 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:40:36.530432 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:40:36.530625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:40:36.544980 ignition[1009]: INFO : Ignition 2.20.0
May 14 23:40:36.544980 ignition[1009]: INFO : Stage: umount
May 14 23:40:36.546790 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:40:36.546790 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:40:36.546790 ignition[1009]: INFO : umount: umount passed
May 14 23:40:36.546790 ignition[1009]: INFO : Ignition finished successfully
May 14 23:40:36.552617 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 23:40:36.552769 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 23:40:36.553668 systemd[1]: Stopped target network.target - Network.
May 14 23:40:36.553938 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 23:40:36.553993 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 23:40:36.554308 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 23:40:36.554364 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 23:40:36.554996 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 23:40:36.555050 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 23:40:36.555334 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 23:40:36.555387 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 23:40:36.556219 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 23:40:36.565941 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 23:40:36.576605 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 23:40:36.576789 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 23:40:36.582189 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 23:40:36.582546 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 23:40:36.582679 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 23:40:36.588129 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 23:40:36.590399 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:40:36.590986 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 23:40:36.591030 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:40:36.593544 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 23:40:36.595623 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 23:40:36.595689 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:40:36.598650 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:40:36.598719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:40:36.601245 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 23:40:36.602756 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 23:40:36.607908 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 23:40:36.607976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:40:36.611570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:40:36.615149 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:40:36.615235 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 23:40:36.618852 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:40:36.620146 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:40:36.623607 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 23:40:36.623712 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 23:40:36.627318 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:40:36.635748 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:40:36.639406 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:40:36.639511 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:40:36.640209 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:40:36.640263 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:40:36.643022 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:40:36.643093 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:40:36.643829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:40:36.643917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:40:36.644556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:40:36.644629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:40:36.653001 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:40:36.653416 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:40:36.653504 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:40:36.658349 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 14 23:40:36.658418 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:40:36.658978 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:40:36.659042 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:40:36.659312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:40:36.659385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:40:36.667648 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 23:40:36.667739 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:40:36.675769 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:40:36.675945 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:40:36.684869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:40:36.685035 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:40:36.685699 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:40:36.686953 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:40:36.710097 systemd[1]: Switching root.
May 14 23:40:36.745440 systemd-journald[192]: Journal stopped
May 14 23:40:38.436212 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 14 23:40:38.436288 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:40:38.436308 kernel: SELinux: policy capability open_perms=1
May 14 23:40:38.436320 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:40:38.436332 kernel: SELinux: policy capability always_check_network=0
May 14 23:40:38.436344 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:40:38.436356 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:40:38.436368 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:40:38.436388 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:40:38.436407 kernel: audit: type=1403 audit(1747266037.561:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:40:38.436422 systemd[1]: Successfully loaded SELinux policy in 41.120ms.
May 14 23:40:38.436443 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.568ms.
May 14 23:40:38.436456 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:40:38.436469 systemd[1]: Detected virtualization kvm.
May 14 23:40:38.436496 systemd[1]: Detected architecture x86-64.
May 14 23:40:38.436509 systemd[1]: Detected first boot.
May 14 23:40:38.436522 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:40:38.436538 zram_generator::config[1057]: No configuration found.
May 14 23:40:38.436552 kernel: Guest personality initialized and is inactive
May 14 23:40:38.436563 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 23:40:38.436575 kernel: Initialized host personality
May 14 23:40:38.436591 kernel: NET: Registered PF_VSOCK protocol family
May 14 23:40:38.436604 systemd[1]: Populated /etc with preset unit settings.
May 14 23:40:38.436617 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 23:40:38.436633 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:40:38.436647 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:40:38.436660 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:40:38.436673 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:40:38.436686 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:40:38.436698 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:40:38.436710 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:40:38.436725 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:40:38.436737 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:40:38.436750 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:40:38.436765 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:40:38.436778 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:40:38.436791 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:40:38.436803 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:40:38.436816 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:40:38.436828 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:40:38.436848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:40:38.436865 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 23:40:38.436878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:40:38.436890 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:40:38.436903 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:40:38.436917 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:40:38.436930 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:40:38.436943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:40:38.436955 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:40:38.436974 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:40:38.436990 systemd[1]: Reached target swap.target - Swaps.
May 14 23:40:38.437002 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:40:38.437016 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:40:38.437029 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 23:40:38.437041 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:40:38.437054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:40:38.437066 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:40:38.437079 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:40:38.437092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:40:38.437104 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:40:38.437119 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:40:38.437131 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:40:38.437144 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:40:38.437156 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:40:38.437169 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:40:38.437182 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:40:38.437195 systemd[1]: Reached target machines.target - Containers.
May 14 23:40:38.437207 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:40:38.437223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:40:38.437236 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:40:38.437248 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:40:38.437262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:40:38.437285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:40:38.437306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:40:38.437323 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:40:38.437338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:40:38.437359 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:40:38.437376 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:40:38.437391 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:40:38.437404 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:40:38.437417 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:40:38.437430 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:40:38.437443 kernel: fuse: init (API version 7.39)
May 14 23:40:38.437455 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:40:38.437467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:40:38.437505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:40:38.437518 kernel: loop: module loaded
May 14 23:40:38.437530 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:40:38.437569 systemd-journald[1139]: Collecting audit messages is disabled.
May 14 23:40:38.437593 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 23:40:38.437610 systemd-journald[1139]: Journal started
May 14 23:40:38.437633 systemd-journald[1139]: Runtime Journal (/run/log/journal/c8e0dc1d321f4081b8fe77abf4b50931) is 6M, max 48.3M, 42.3M free.
May 14 23:40:38.437675 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:40:38.193895 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:40:38.207588 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 23:40:38.208099 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:40:38.441111 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:40:38.441937 systemd[1]: Stopped verity-setup.service.
May 14 23:40:38.444275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:40:38.448920 kernel: ACPI: bus type drm_connector registered
May 14 23:40:38.448986 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:40:38.452431 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:40:38.454420 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:40:38.455664 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:40:38.457704 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:40:38.458926 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:40:38.460203 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:40:38.461532 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:40:38.463153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:40:38.464737 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:40:38.464988 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:40:38.466472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:40:38.466724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:40:38.468162 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:40:38.468383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:40:38.469753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:40:38.469981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:40:38.471861 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:40:38.472152 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:40:38.473816 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:40:38.474085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:40:38.475529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:40:38.477085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:40:38.478738 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:40:38.480359 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 23:40:38.494240 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:40:38.497105 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:40:38.499590 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:40:38.500765 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:40:38.500793 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:40:38.502868 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:40:38.517856 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:40:38.520290 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:40:38.521675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:40:38.524497 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:40:38.528578 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:40:38.530227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:40:38.531961 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:40:38.534450 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:40:38.536876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:40:38.539904 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:40:38.542990 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:40:38.547034 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:40:38.551139 systemd-journald[1139]: Time spent on flushing to /var/log/journal/c8e0dc1d321f4081b8fe77abf4b50931 is 13.751ms for 968 entries.
May 14 23:40:38.551139 systemd-journald[1139]: System Journal (/var/log/journal/c8e0dc1d321f4081b8fe77abf4b50931) is 8M, max 195.6M, 187.6M free.
May 14 23:40:38.622941 systemd-journald[1139]: Received client request to flush runtime journal.
May 14 23:40:38.623018 kernel: loop0: detected capacity change from 0 to 109808
May 14 23:40:38.557355 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:40:38.560141 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:40:38.574994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:40:38.588184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:40:38.618437 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:40:38.668565 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:40:38.678718 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:40:38.681573 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:40:38.685640 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:40:38.691504 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:40:38.694897 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
May 14 23:40:38.695142 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
May 14 23:40:38.710068 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:40:38.717131 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:40:38.718612 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 14 23:40:38.727502 kernel: loop1: detected capacity change from 0 to 210664
May 14 23:40:38.749052 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:40:38.768671 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:40:38.774654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:40:38.789533 kernel: loop2: detected capacity change from 0 to 151640
May 14 23:40:38.812565 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
May 14 23:40:38.812586 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
May 14 23:40:38.818731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:40:38.849579 kernel: loop3: detected capacity change from 0 to 109808
May 14 23:40:38.858528 kernel: loop4: detected capacity change from 0 to 210664
May 14 23:40:38.868527 kernel: loop5: detected capacity change from 0 to 151640
May 14 23:40:38.884119 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 23:40:38.884773 (sd-merge)[1205]: Merged extensions into '/usr'.
May 14 23:40:38.889003 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:40:38.889023 systemd[1]: Reloading...
May 14 23:40:38.987873 zram_generator::config[1229]: No configuration found.
May 14 23:40:39.044026 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 23:40:39.113781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:40:39.179991 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:40:39.180632 systemd[1]: Reloading finished in 291 ms.
May 14 23:40:39.201047 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 23:40:39.203149 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:40:39.252219 systemd[1]: Starting ensure-sysext.service...
May 14 23:40:39.254267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:40:39.270051 systemd[1]: Reload requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
May 14 23:40:39.270069 systemd[1]: Reloading...
May 14 23:40:39.286976 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:40:39.287262 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:40:39.288397 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:40:39.288715 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 14 23:40:39.288802 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 14 23:40:39.293070 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:40:39.293084 systemd-tmpfiles[1271]: Skipping /boot
May 14 23:40:39.311286 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:40:39.312851 systemd-tmpfiles[1271]: Skipping /boot
May 14 23:40:39.346643 zram_generator::config[1303]: No configuration found.
May 14 23:40:39.469476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:40:39.537976 systemd[1]: Reloading finished in 267 ms.
May 14 23:40:39.551949 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:40:39.571455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:40:39.581852 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:40:39.584680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:40:39.598410 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:40:39.602938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:40:39.605714 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:40:39.611689 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:40:39.616308 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:40:39.616528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:40:39.626702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:40:39.629823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:40:39.633731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:40:39.634376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:40:39.634504 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:40:39.634600 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:40:39.645842 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:40:39.648531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:40:39.652346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:40:39.652642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:40:39.654533 augenrules[1368]: No rules
May 14 23:40:39.655048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:40:39.655277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:40:39.663047 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:40:39.663335 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:40:39.665328 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:40:39.665658 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:40:39.665932 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
May 14 23:40:39.674439 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:40:39.678648 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 23:40:39.686624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:40:39.688235 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:40:39.689547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:40:39.693688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:40:39.703939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:40:39.709687 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:40:39.718156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:40:39.719501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:40:39.719620 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:40:39.723553 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 23:40:39.724666 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 23:40:39.724773 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:40:39.726195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:40:39.728343 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:40:39.738936 augenrules[1380]: /sbin/augenrules: No change
May 14 23:40:39.738225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:40:39.738522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:40:39.742573 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:40:39.742827 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:40:39.744572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:40:39.744790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:40:39.748317 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:40:39.748574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:40:39.751095 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 23:40:39.757163 augenrules[1426]: No rules
May 14 23:40:39.757865 systemd[1]: Finished ensure-sysext.service.
May 14 23:40:39.759408 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:40:39.759714 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:40:39.778607 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 23:40:39.782185 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:40:39.783577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:40:39.783647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:40:39.789527 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382)
May 14 23:40:39.789581 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 23:40:39.850521 systemd-resolved[1342]: Positive Trust Anchors:
May 14 23:40:39.850921 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:40:39.850956 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:40:39.854854 systemd-resolved[1342]: Defaulting to hostname 'linux'.
May 14 23:40:39.857358 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:40:39.858930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:40:39.884872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 23:40:39.886520 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 14 23:40:39.892340 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:40:39.898526 kernel: ACPI: button: Power Button [PWRF]
May 14 23:40:39.905462 systemd-networkd[1439]: lo: Link UP
May 14 23:40:39.905477 systemd-networkd[1439]: lo: Gained carrier
May 14 23:40:39.911162 systemd-networkd[1439]: Enumeration completed
May 14 23:40:39.911273 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:40:39.912762 systemd[1]: Reached target network.target - Network.
May 14 23:40:39.915886 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:40:39.920676 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:40:39.926650 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:40:39.926658 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:40:39.928508 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 14 23:40:39.928429 systemd-networkd[1439]: eth0: Link UP
May 14 23:40:39.928442 systemd-networkd[1439]: eth0: Gained carrier
May 14 23:40:39.928468 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:40:39.933830 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 23:40:39.934113 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 14 23:40:39.934311 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 23:40:39.935446 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 23:40:39.939946 systemd[1]: Reached target time-set.target - System Time Set.
May 14 23:40:39.942910 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:40:39.943720 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
May 14 23:40:39.945150 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 23:40:39.945258 systemd-timesyncd[1440]: Initial clock synchronization to Wed 2025-05-14 23:40:39.833040 UTC.
May 14 23:40:39.963334 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:40:39.982108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:40:39.997940 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 23:40:40.074527 kernel: mousedev: PS/2 mouse device common for all mice
May 14 23:40:40.091815 kernel: kvm_amd: TSC scaling supported
May 14 23:40:40.091891 kernel: kvm_amd: Nested Virtualization enabled
May 14 23:40:40.091905 kernel: kvm_amd: Nested Paging enabled
May 14 23:40:40.092913 kernel: kvm_amd: LBR virtualization supported
May 14 23:40:40.092930 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 23:40:40.094528 kernel: kvm_amd: Virtual GIF supported
May 14 23:40:40.113555 kernel: EDAC MC: Ver: 3.0.0
May 14 23:40:40.146389 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:40:40.153334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:40:40.157416 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:40:40.181072 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:40:40.219030 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:40:40.220816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:40:40.222026 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:40:40.223334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 23:40:40.224827 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 23:40:40.226319 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 23:40:40.227694 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 23:40:40.229125 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 23:40:40.230581 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 23:40:40.230618 systemd[1]: Reached target paths.target - Path Units.
May 14 23:40:40.231660 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:40:40.233690 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 23:40:40.236794 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 23:40:40.240730 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 23:40:40.242351 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 23:40:40.243790 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 23:40:40.247970 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 23:40:40.249569 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 23:40:40.252291 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:40:40.254283 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 23:40:40.255504 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:40:40.256455 systemd[1]: Reached target basic.target - Basic System.
May 14 23:40:40.257426 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 23:40:40.257456 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 23:40:40.258584 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 23:40:40.260778 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 23:40:40.265564 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 23:40:40.268224 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 23:40:40.269455 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 23:40:40.271673 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:40:40.272002 jq[1474]: false
May 14 23:40:40.273211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 23:40:40.276149 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 23:40:40.282128 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 23:40:40.287235 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 23:40:40.293225 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 23:40:40.293514 extend-filesystems[1475]: Found loop3
May 14 23:40:40.293514 extend-filesystems[1475]: Found loop4
May 14 23:40:40.297003 extend-filesystems[1475]: Found loop5
May 14 23:40:40.297003 extend-filesystems[1475]: Found sr0
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda1
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda2
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda3
May 14 23:40:40.297003 extend-filesystems[1475]: Found usr
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda4
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda6
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda7
May 14 23:40:40.297003 extend-filesystems[1475]: Found vda9
May 14 23:40:40.297003 extend-filesystems[1475]: Checking size of /dev/vda9
May 14 23:40:40.298912 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 23:40:40.301848 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 23:40:40.303618 systemd[1]: Starting update-engine.service - Update Engine...
May 14 23:40:40.314708 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 23:40:40.320274 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:40:40.323239 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 23:40:40.323643 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 23:40:40.324095 systemd[1]: motdgen.service: Deactivated successfully.
May 14 23:40:40.324442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 23:40:40.325750 jq[1493]: true
May 14 23:40:40.327684 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 23:40:40.328738 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 23:40:40.339135 dbus-daemon[1473]: [system] SELinux support is enabled
May 14 23:40:40.339794 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 23:40:40.341850 extend-filesystems[1475]: Resized partition /dev/vda9
May 14 23:40:40.348770 extend-filesystems[1503]: resize2fs 1.47.2 (1-Jan-2025)
May 14 23:40:40.352101 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 23:40:40.362938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1389)
May 14 23:40:40.367141 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 23:40:40.367182 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 23:40:40.373273 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 23:40:40.373303 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 23:40:40.378413 jq[1498]: true
May 14 23:40:40.378727 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 23:40:40.390543 tar[1497]: linux-amd64/helm
May 14 23:40:40.392897 update_engine[1490]: I20250514 23:40:40.392808 1490 main.cc:92] Flatcar Update Engine starting
May 14 23:40:40.405643 update_engine[1490]: I20250514 23:40:40.394361 1490 update_check_scheduler.cc:74] Next update check in 2m32s
May 14 23:40:40.406623 systemd[1]: Started update-engine.service - Update Engine.
May 14 23:40:40.415106 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 23:40:40.501546 systemd-logind[1486]: Watching system buttons on /dev/input/event1 (Power Button)
May 14 23:40:40.501600 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 23:40:40.502125 systemd-logind[1486]: New seat seat0.
May 14 23:40:40.504349 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 23:40:40.512430 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 23:40:40.536869 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 23:40:40.570354 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 23:40:40.573727 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 23:40:40.606276 systemd[1]: issuegen.service: Deactivated successfully.
May 14 23:40:40.606665 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 23:40:40.610202 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 23:40:40.614537 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 23:40:40.642729 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 23:40:40.646653 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 23:40:40.649696 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 23:40:40.651121 systemd[1]: Reached target getty.target - Login Prompts.
May 14 23:40:40.668493 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 23:40:40.668493 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 23:40:40.668493 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 23:40:40.672180 extend-filesystems[1475]: Resized filesystem in /dev/vda9
May 14 23:40:40.675729 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 23:40:40.676122 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 23:40:40.677123 bash[1527]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:40:40.678298 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 23:40:40.702408 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 23:40:40.980023 containerd[1499]: time="2025-05-14T23:40:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 23:40:40.982005 containerd[1499]: time="2025-05-14T23:40:40.981965038Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 14 23:40:40.996904 containerd[1499]: time="2025-05-14T23:40:40.996837350Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.903µs"
May 14 23:40:40.996904 containerd[1499]: time="2025-05-14T23:40:40.996880905Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 23:40:40.996904 containerd[1499]: time="2025-05-14T23:40:40.996904999Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 23:40:40.997218 containerd[1499]: time="2025-05-14T23:40:40.997182062Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 23:40:40.997218 containerd[1499]: time="2025-05-14T23:40:40.997208250Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 23:40:40.997292 containerd[1499]: time="2025-05-14T23:40:40.997248426Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 23:40:40.997390 containerd[1499]: time="2025-05-14T23:40:40.997353010Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 23:40:40.997390 containerd[1499]: time="2025-05-14T23:40:40.997375918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 23:40:40.997797 containerd[1499]: time="2025-05-14T23:40:40.997758297Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 23:40:40.997797 containerd[1499]: time="2025-05-14T23:40:40.997778795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 23:40:40.997797 containerd[1499]: time="2025-05-14T23:40:40.997793414Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 23:40:40.997906 containerd[1499]: time="2025-05-14T23:40:40.997804419Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 23:40:40.997986 containerd[1499]: time="2025-05-14T23:40:40.997950928Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 23:40:40.998342 containerd[1499]: time="2025-05-14T23:40:40.998303947Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 23:40:40.998400 containerd[1499]: time="2025-05-14T23:40:40.998350771Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 23:40:40.998400 containerd[1499]: time="2025-05-14T23:40:40.998367012Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 23:40:40.998455 containerd[1499]: time="2025-05-14T23:40:40.998399473Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 23:40:40.998739 containerd[1499]: time="2025-05-14T23:40:40.998687353Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 23:40:40.998817 containerd[1499]: time="2025-05-14T23:40:40.998793814Z" level=info msg="metadata content store policy set" policy=shared
May 14 23:40:41.006042 containerd[1499]: time="2025-05-14T23:40:41.005964114Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006099244Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006119047Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006134129Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006161422Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006178473Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006197197Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006213890Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006226607Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 23:40:41.006240 containerd[1499]: time="2025-05-14T23:40:41.006239007Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 23:40:41.006509 containerd[1499]: time="2025-05-14T23:40:41.006250398Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 23:40:41.006509 containerd[1499]: time="2025-05-14T23:40:41.006264163Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 23:40:41.006565 containerd[1499]: time="2025-05-14T23:40:41.006540946Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 23:40:41.006595 containerd[1499]: time="2025-05-14T23:40:41.006579570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 23:40:41.006615 containerd[1499]: time="2025-05-14T23:40:41.006599610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 23:40:41.006645 containerd[1499]: time="2025-05-14T23:40:41.006620035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 23:40:41.006645 containerd[1499]: time="2025-05-14T23:40:41.006639362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 23:40:41.006682 containerd[1499]: time="2025-05-14T23:40:41.006652534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 23:40:41.006682 containerd[1499]: time="2025-05-14T23:40:41.006664785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 23:40:41.006682 containerd[1499]: time="2025-05-14T23:40:41.006676611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 23:40:41.006745 containerd[1499]: time="2025-05-14T23:40:41.006689337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 23:40:41.006745 containerd[1499]: time="2025-05-14T23:40:41.006701658Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 23:40:41.006745 containerd[1499]: time="2025-05-14T23:40:41.006731703Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 23:40:41.006842 containerd[1499]: time="2025-05-14T23:40:41.006813572Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 23:40:41.006842 containerd[1499]: time="2025-05-14T23:40:41.006838422Z" level=info msg="Start snapshots syncer"
May 14 23:40:41.006888 containerd[1499]: time="2025-05-14T23:40:41.006880569Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 23:40:41.007227 containerd[1499]: time="2025-05-14T23:40:41.007164784Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 14 23:40:41.007438 containerd[1499]: time="2025-05-14T23:40:41.007253552Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 23:40:41.007438 containerd[1499]: time="2025-05-14T23:40:41.007335383Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 14 23:40:41.007546 containerd[1499]: time="2025-05-14T23:40:41.007463883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 14 23:40:41.007598 containerd[1499]: time="2025-05-14T23:40:41.007547148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 14 23:40:41.007598 containerd[1499]: time="2025-05-14T23:40:41.007573314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 14 23:40:41.007598 containerd[1499]: time="2025-05-14T23:40:41.007589672Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 14 23:40:41.007653 containerd[1499]: time="2025-05-14T23:40:41.007605456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 14 23:40:41.007653 containerd[1499]: time="2025-05-14T23:40:41.007630473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 14 23:40:41.007653 containerd[1499]: time="2025-05-14T23:40:41.007642240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 14 23:40:41.007705 containerd[1499]: time="2025-05-14T23:40:41.007668524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 14 23:40:41.007705 containerd[1499]: time="2025-05-14T23:40:41.007692373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007714273Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007752274Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007766257Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007775867Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007785792Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007794105Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007804673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007815223Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007833154Z" level=info msg="runtime interface created" May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007839300Z" level=info msg="created NRI interface" May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007848058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007859181Z" level=info msg="Connect containerd service" May 14 23:40:41.007927 containerd[1499]: time="2025-05-14T23:40:41.007882981Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:40:41.008744 
containerd[1499]: time="2025-05-14T23:40:41.008713519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:40:41.132168 tar[1497]: linux-amd64/LICENSE May 14 23:40:41.132168 tar[1497]: linux-amd64/README.md May 14 23:40:41.163289 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:40:41.180898 containerd[1499]: time="2025-05-14T23:40:41.180795978Z" level=info msg="Start subscribing containerd event" May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181298678Z" level=info msg="Start recovering state" May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181634322Z" level=info msg="Start event monitor" May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181663813Z" level=info msg="Start cni network conf syncer for default" May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181685802Z" level=info msg="Start streaming server" May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181700517Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181712442Z" level=info msg="runtime interface starting up..." May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181731155Z" level=info msg="starting plugins..." May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181752878Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 23:40:41.181870 containerd[1499]: time="2025-05-14T23:40:41.181334947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:40:41.182187 containerd[1499]: time="2025-05-14T23:40:41.182153915Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 14 23:40:41.182769 containerd[1499]: time="2025-05-14T23:40:41.182416647Z" level=info msg="containerd successfully booted in 0.203089s" May 14 23:40:41.182573 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:40:41.549730 systemd-networkd[1439]: eth0: Gained IPv6LL May 14 23:40:41.553116 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:40:41.555029 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:40:41.557708 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 23:40:41.561812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:40:41.572492 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:40:41.599397 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 23:40:41.599994 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 23:40:41.602052 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:40:41.604750 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:40:42.785695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:40:42.787497 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:40:42.790086 systemd[1]: Startup finished in 933ms (kernel) + 7.832s (initrd) + 5.268s (userspace) = 14.034s. May 14 23:40:42.816941 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:40:43.238619 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:40:43.240372 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:57642.service - OpenSSH per-connection server daemon (10.0.0.1:57642). 
May 14 23:40:43.405143 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 57642 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:40:43.407651 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:43.423459 systemd-logind[1486]: New session 1 of user core.
May 14 23:40:43.425233 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 23:40:43.426963 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 23:40:43.526634 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 23:40:43.529750 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 23:40:43.555338 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 23:40:43.558239 systemd-logind[1486]: New session c1 of user core.
May 14 23:40:43.590747 kubelet[1600]: E0514 23:40:43.590598 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:40:43.596595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:40:43.596842 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:40:43.597270 systemd[1]: kubelet.service: Consumed 1.798s CPU time, 244.3M memory peak.
May 14 23:40:43.842777 systemd[1617]: Queued start job for default target default.target.
May 14 23:40:43.882134 systemd[1617]: Created slice app.slice - User Application Slice.
May 14 23:40:43.882164 systemd[1617]: Reached target paths.target - Paths.
May 14 23:40:43.882209 systemd[1617]: Reached target timers.target - Timers.
May 14 23:40:43.884190 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 23:40:43.897228 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 23:40:43.897367 systemd[1617]: Reached target sockets.target - Sockets.
May 14 23:40:43.897412 systemd[1617]: Reached target basic.target - Basic System.
May 14 23:40:43.897456 systemd[1617]: Reached target default.target - Main User Target.
May 14 23:40:43.897524 systemd[1617]: Startup finished in 329ms.
May 14 23:40:43.898112 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 23:40:43.900252 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 23:40:43.968043 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:57648.service - OpenSSH per-connection server daemon (10.0.0.1:57648).
May 14 23:40:44.027249 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 57648 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:40:44.029144 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:44.033513 systemd-logind[1486]: New session 2 of user core.
May 14 23:40:44.043621 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 23:40:44.099663 sshd[1631]: Connection closed by 10.0.0.1 port 57648
May 14 23:40:44.100161 sshd-session[1629]: pam_unix(sshd:session): session closed for user core
May 14 23:40:44.109987 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:57648.service: Deactivated successfully.
May 14 23:40:44.112276 systemd[1]: session-2.scope: Deactivated successfully.
May 14 23:40:44.114064 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit.
May 14 23:40:44.115991 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:57662.service - OpenSSH per-connection server daemon (10.0.0.1:57662).
May 14 23:40:44.117105 systemd-logind[1486]: Removed session 2.
May 14 23:40:44.176967 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 57662 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:40:44.178740 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:44.183412 systemd-logind[1486]: New session 3 of user core.
May 14 23:40:44.191618 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 23:40:44.242629 sshd[1639]: Connection closed by 10.0.0.1 port 57662
May 14 23:40:44.243031 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
May 14 23:40:44.254918 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:57662.service: Deactivated successfully.
May 14 23:40:44.257074 systemd[1]: session-3.scope: Deactivated successfully.
May 14 23:40:44.258903 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit.
May 14 23:40:44.260299 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:57668.service - OpenSSH per-connection server daemon (10.0.0.1:57668).
May 14 23:40:44.261614 systemd-logind[1486]: Removed session 3.
May 14 23:40:44.312102 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 57668 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:40:44.314018 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:44.319540 systemd-logind[1486]: New session 4 of user core.
May 14 23:40:44.329661 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 23:40:44.385255 sshd[1647]: Connection closed by 10.0.0.1 port 57668
May 14 23:40:44.385614 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
May 14 23:40:44.398658 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:57668.service: Deactivated successfully.
May 14 23:40:44.400938 systemd[1]: session-4.scope: Deactivated successfully.
May 14 23:40:44.403013 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
May 14 23:40:44.404325 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:57670.service - OpenSSH per-connection server daemon (10.0.0.1:57670).
May 14 23:40:44.405250 systemd-logind[1486]: Removed session 4.
May 14 23:40:44.458985 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 57670 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:40:44.460918 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:44.466008 systemd-logind[1486]: New session 5 of user core.
May 14 23:40:44.480021 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 23:40:44.540670 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 23:40:44.541022 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:40:44.563210 sudo[1656]: pam_unix(sudo:session): session closed for user root
May 14 23:40:44.565207 sshd[1655]: Connection closed by 10.0.0.1 port 57670
May 14 23:40:44.565608 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
May 14 23:40:44.576848 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:57670.service: Deactivated successfully.
May 14 23:40:44.578825 systemd[1]: session-5.scope: Deactivated successfully.
May 14 23:40:44.580952 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit.
May 14 23:40:44.582541 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:57680.service - OpenSSH per-connection server daemon (10.0.0.1:57680).
May 14 23:40:44.583561 systemd-logind[1486]: Removed session 5.
May 14 23:40:44.639943 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 57680 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:40:44.641769 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:44.646394 systemd-logind[1486]: New session 6 of user core.
May 14 23:40:44.660614 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 23:40:44.713908 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 23:40:44.714346 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:40:44.718518 sudo[1666]: pam_unix(sudo:session): session closed for user root
May 14 23:40:44.724581 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 23:40:44.724913 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:40:44.735767 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:40:44.787516 augenrules[1688]: No rules
May 14 23:40:44.790034 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:40:44.790443 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:40:44.791797 sudo[1665]: pam_unix(sudo:session): session closed for user root
May 14 23:40:44.793756 sshd[1664]: Connection closed by 10.0.0.1 port 57680
May 14 23:40:44.794192 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
May 14 23:40:44.806585 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:57680.service: Deactivated successfully.
May 14 23:40:44.808789 systemd[1]: session-6.scope: Deactivated successfully.
May 14 23:40:44.810872 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit.
May 14 23:40:44.812215 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:57688.service - OpenSSH per-connection server daemon (10.0.0.1:57688).
May 14 23:40:44.813133 systemd-logind[1486]: Removed session 6.
May 14 23:40:44.868126 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 57688 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:40:44.869774 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:44.874459 systemd-logind[1486]: New session 7 of user core.
May 14 23:40:44.884609 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 23:40:44.937947 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 23:40:44.938354 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:40:45.611337 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 23:40:45.624804 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 23:40:46.142599 dockerd[1720]: time="2025-05-14T23:40:46.142528469Z" level=info msg="Starting up"
May 14 23:40:46.146938 dockerd[1720]: time="2025-05-14T23:40:46.146883158Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 23:40:46.662274 dockerd[1720]: time="2025-05-14T23:40:46.662216432Z" level=info msg="Loading containers: start."
May 14 23:40:46.886528 kernel: Initializing XFRM netlink socket
May 14 23:40:46.969555 systemd-networkd[1439]: docker0: Link UP
May 14 23:40:47.050092 dockerd[1720]: time="2025-05-14T23:40:47.050005293Z" level=info msg="Loading containers: done."
May 14 23:40:47.105927 dockerd[1720]: time="2025-05-14T23:40:47.105839649Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 23:40:47.106203 dockerd[1720]: time="2025-05-14T23:40:47.105975873Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 14 23:40:47.106203 dockerd[1720]: time="2025-05-14T23:40:47.106150277Z" level=info msg="Daemon has completed initialization"
May 14 23:40:47.191630 dockerd[1720]: time="2025-05-14T23:40:47.191543277Z" level=info msg="API listen on /run/docker.sock"
May 14 23:40:47.191874 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 23:40:48.669509 containerd[1499]: time="2025-05-14T23:40:48.669433993Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 14 23:40:49.352695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083181551.mount: Deactivated successfully.
May 14 23:40:50.610930 containerd[1499]: time="2025-05-14T23:40:50.610837503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:50.611573 containerd[1499]: time="2025-05-14T23:40:50.611501639Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 14 23:40:50.613001 containerd[1499]: time="2025-05-14T23:40:50.612956420Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:50.616202 containerd[1499]: time="2025-05-14T23:40:50.616140866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:50.617046 containerd[1499]: time="2025-05-14T23:40:50.616991733Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.947508073s"
May 14 23:40:50.617046 containerd[1499]: time="2025-05-14T23:40:50.617036642Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 14 23:40:50.642167 containerd[1499]: time="2025-05-14T23:40:50.642111263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 14 23:40:52.501708 containerd[1499]: time="2025-05-14T23:40:52.501625938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:52.502393 containerd[1499]: time="2025-05-14T23:40:52.502319331Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 14 23:40:52.504063 containerd[1499]: time="2025-05-14T23:40:52.504026007Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:52.507080 containerd[1499]: time="2025-05-14T23:40:52.507031144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:52.508125 containerd[1499]: time="2025-05-14T23:40:52.508046433Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.865880173s"
May 14 23:40:52.508125 containerd[1499]: time="2025-05-14T23:40:52.508102351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 14 23:40:52.528976 containerd[1499]: time="2025-05-14T23:40:52.528932442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 14 23:40:53.768244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 23:40:53.770641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:40:54.104763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:40:54.114024 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:40:54.170316 kubelet[2028]: E0514 23:40:54.170222 2028 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:40:54.178135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:40:54.178364 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:40:54.178812 systemd[1]: kubelet.service: Consumed 267ms CPU time, 98.6M memory peak.
May 14 23:40:54.509136 containerd[1499]: time="2025-05-14T23:40:54.508924505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:54.510578 containerd[1499]: time="2025-05-14T23:40:54.510506238Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 14 23:40:54.512578 containerd[1499]: time="2025-05-14T23:40:54.512544571Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:54.515525 containerd[1499]: time="2025-05-14T23:40:54.515466833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:54.516600 containerd[1499]: time="2025-05-14T23:40:54.516544977Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.987568499s"
May 14 23:40:54.516600 containerd[1499]: time="2025-05-14T23:40:54.516602261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 14 23:40:54.536167 containerd[1499]: time="2025-05-14T23:40:54.536113376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 14 23:40:55.867235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086222302.mount: Deactivated successfully.
May 14 23:40:56.149957 containerd[1499]: time="2025-05-14T23:40:56.149754664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:56.151126 containerd[1499]: time="2025-05-14T23:40:56.151027779Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
May 14 23:40:56.152557 containerd[1499]: time="2025-05-14T23:40:56.152515542Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:56.154784 containerd[1499]: time="2025-05-14T23:40:56.154737209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:56.155357 containerd[1499]: time="2025-05-14T23:40:56.155313428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.619149601s"
May 14 23:40:56.155357 containerd[1499]: time="2025-05-14T23:40:56.155354728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 14 23:40:56.178655 containerd[1499]: time="2025-05-14T23:40:56.178601031Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 23:40:56.686098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826760961.mount: Deactivated successfully.
May 14 23:40:57.872918 containerd[1499]: time="2025-05-14T23:40:57.872842851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:57.874142 containerd[1499]: time="2025-05-14T23:40:57.873655391Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 14 23:40:57.875521 containerd[1499]: time="2025-05-14T23:40:57.875258182Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:57.879945 containerd[1499]: time="2025-05-14T23:40:57.879867526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:57.880722 containerd[1499]: time="2025-05-14T23:40:57.880679847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.702029011s"
May 14 23:40:57.880722 containerd[1499]: time="2025-05-14T23:40:57.880722505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 14 23:40:57.902638 containerd[1499]: time="2025-05-14T23:40:57.902574847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 14 23:40:58.371868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099482067.mount: Deactivated successfully.
May 14 23:40:58.377966 containerd[1499]: time="2025-05-14T23:40:58.377869979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:58.378725 containerd[1499]: time="2025-05-14T23:40:58.378674280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
May 14 23:40:58.379922 containerd[1499]: time="2025-05-14T23:40:58.379877487Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:58.381871 containerd[1499]: time="2025-05-14T23:40:58.381841398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:58.382505 containerd[1499]: time="2025-05-14T23:40:58.382457425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 479.832584ms"
May 14 23:40:58.382505 containerd[1499]: time="2025-05-14T23:40:58.382497499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 14 23:40:58.402982 containerd[1499]: time="2025-05-14T23:40:58.402914968Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 14 23:40:58.903436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount145419305.mount: Deactivated successfully.
May 14 23:41:00.613232 containerd[1499]: time="2025-05-14T23:41:00.613155872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:41:00.614073 containerd[1499]: time="2025-05-14T23:41:00.614020498Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
May 14 23:41:00.615236 containerd[1499]: time="2025-05-14T23:41:00.615192111Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:41:00.618086 containerd[1499]: time="2025-05-14T23:41:00.618049178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:41:00.619278 containerd[1499]: time="2025-05-14T23:41:00.619244143Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.216283144s"
May 14 23:41:00.619365 containerd[1499]: time="2025-05-14T23:41:00.619278343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 14 23:41:03.454200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:41:03.454394 systemd[1]: kubelet.service: Consumed 267ms CPU time, 98.6M memory peak.
May 14 23:41:03.456976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:41:03.484600 systemd[1]: Reload requested from client PID 2270 ('systemctl') (unit session-7.scope)...
May 14 23:41:03.484629 systemd[1]: Reloading...
May 14 23:41:03.600594 zram_generator::config[2313]: No configuration found.
May 14 23:41:03.871414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:41:04.006638 systemd[1]: Reloading finished in 521 ms.
May 14 23:41:04.075746 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 14 23:41:04.075891 systemd[1]: kubelet.service: Failed with result 'signal'.
May 14 23:41:04.076558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:41:04.076628 systemd[1]: kubelet.service: Consumed 150ms CPU time, 83.6M memory peak.
May 14 23:41:04.080741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:41:04.292175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:41:04.303131 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:41:04.354692 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:41:04.354692 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:41:04.354692 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:41:04.356023 kubelet[2362]: I0514 23:41:04.355966 2362 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:41:04.744210 kubelet[2362]: I0514 23:41:04.744042 2362 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 23:41:04.744210 kubelet[2362]: I0514 23:41:04.744087 2362 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:41:04.746327 kubelet[2362]: I0514 23:41:04.746298 2362 server.go:927] "Client rotation is on, will bootstrap in background" May 14 23:41:04.764957 kubelet[2362]: I0514 23:41:04.764890 2362 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:41:04.765628 kubelet[2362]: E0514 23:41:04.765553 2362 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.778673 kubelet[2362]: I0514 23:41:04.778625 2362 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:41:04.780857 kubelet[2362]: I0514 23:41:04.780572 2362 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:41:04.781016 kubelet[2362]: I0514 23:41:04.780852 2362 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 23:41:04.781120 kubelet[2362]: I0514 23:41:04.781039 2362 topology_manager.go:138] "Creating topology manager with none policy" May 14 
23:41:04.781120 kubelet[2362]: I0514 23:41:04.781049 2362 container_manager_linux.go:301] "Creating device plugin manager" May 14 23:41:04.781222 kubelet[2362]: I0514 23:41:04.781203 2362 state_mem.go:36] "Initialized new in-memory state store" May 14 23:41:04.781951 kubelet[2362]: I0514 23:41:04.781919 2362 kubelet.go:400] "Attempting to sync node with API server" May 14 23:41:04.781951 kubelet[2362]: I0514 23:41:04.781942 2362 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:41:04.782043 kubelet[2362]: I0514 23:41:04.781971 2362 kubelet.go:312] "Adding apiserver pod source" May 14 23:41:04.782043 kubelet[2362]: I0514 23:41:04.781994 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:41:04.787663 kubelet[2362]: W0514 23:41:04.787591 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.787803 kubelet[2362]: E0514 23:41:04.787770 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.787882 kubelet[2362]: W0514 23:41:04.787804 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.787882 kubelet[2362]: E0514 23:41:04.787871 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection 
refused May 14 23:41:04.789072 kubelet[2362]: I0514 23:41:04.788588 2362 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 23:41:04.790669 kubelet[2362]: I0514 23:41:04.790281 2362 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:41:04.790669 kubelet[2362]: W0514 23:41:04.790406 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:41:04.791477 kubelet[2362]: I0514 23:41:04.791461 2362 server.go:1264] "Started kubelet" May 14 23:41:04.791917 kubelet[2362]: I0514 23:41:04.791864 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:41:04.792963 kubelet[2362]: I0514 23:41:04.792937 2362 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:41:04.793069 kubelet[2362]: I0514 23:41:04.793034 2362 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:41:04.798759 kubelet[2362]: I0514 23:41:04.797146 2362 server.go:455] "Adding debug handlers to kubelet server" May 14 23:41:04.798759 kubelet[2362]: I0514 23:41:04.797406 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:41:04.798759 kubelet[2362]: E0514 23:41:04.797831 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f893a7ffcb212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 
23:41:04.791417362 +0000 UTC m=+0.482836951,LastTimestamp:2025-05-14 23:41:04.791417362 +0000 UTC m=+0.482836951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 23:41:04.798759 kubelet[2362]: I0514 23:41:04.798045 2362 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 23:41:04.798759 kubelet[2362]: I0514 23:41:04.798137 2362 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:41:04.798759 kubelet[2362]: I0514 23:41:04.798196 2362 reconciler.go:26] "Reconciler: start to sync state" May 14 23:41:04.798759 kubelet[2362]: W0514 23:41:04.798594 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.798759 kubelet[2362]: E0514 23:41:04.798636 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.800022 kubelet[2362]: I0514 23:41:04.799796 2362 factory.go:221] Registration of the systemd container factory successfully May 14 23:41:04.800022 kubelet[2362]: I0514 23:41:04.799876 2362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:41:04.800361 kubelet[2362]: E0514 23:41:04.800337 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" May 
14 23:41:04.800503 kubelet[2362]: E0514 23:41:04.800458 2362 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:41:04.800907 kubelet[2362]: I0514 23:41:04.800883 2362 factory.go:221] Registration of the containerd container factory successfully May 14 23:41:04.815224 kubelet[2362]: I0514 23:41:04.815174 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:41:04.816901 kubelet[2362]: I0514 23:41:04.816876 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:41:04.817034 kubelet[2362]: I0514 23:41:04.816918 2362 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:41:04.817034 kubelet[2362]: I0514 23:41:04.816936 2362 kubelet.go:2337] "Starting kubelet main sync loop" May 14 23:41:04.817034 kubelet[2362]: E0514 23:41:04.816984 2362 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:41:04.817498 kubelet[2362]: I0514 23:41:04.817346 2362 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:41:04.817498 kubelet[2362]: I0514 23:41:04.817367 2362 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:41:04.817498 kubelet[2362]: I0514 23:41:04.817388 2362 state_mem.go:36] "Initialized new in-memory state store" May 14 23:41:04.820316 kubelet[2362]: W0514 23:41:04.820267 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.820423 kubelet[2362]: E0514 23:41:04.820393 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:04.900038 kubelet[2362]: I0514 23:41:04.899964 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:41:04.900556 kubelet[2362]: E0514 23:41:04.900501 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 14 23:41:04.917542 kubelet[2362]: E0514 23:41:04.917510 2362 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:41:05.001417 kubelet[2362]: E0514 23:41:05.001291 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" May 14 23:41:05.101923 kubelet[2362]: I0514 23:41:05.101892 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:41:05.102392 kubelet[2362]: E0514 23:41:05.102353 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 14 23:41:05.118446 kubelet[2362]: E0514 23:41:05.118419 2362 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:41:05.139197 kubelet[2362]: I0514 23:41:05.139138 2362 policy_none.go:49] "None policy: Start" May 14 23:41:05.140222 kubelet[2362]: I0514 23:41:05.140176 2362 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:41:05.140222 kubelet[2362]: I0514 23:41:05.140206 2362 state_mem.go:35] "Initializing new in-memory state store" May 14 23:41:05.150101 systemd[1]: 
Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:41:05.165282 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:41:05.178723 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:41:05.180283 kubelet[2362]: I0514 23:41:05.180243 2362 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:41:05.180581 kubelet[2362]: I0514 23:41:05.180534 2362 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:41:05.180734 kubelet[2362]: I0514 23:41:05.180700 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:41:05.181913 kubelet[2362]: E0514 23:41:05.181881 2362 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 23:41:05.402917 kubelet[2362]: E0514 23:41:05.402836 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" May 14 23:41:05.504771 kubelet[2362]: I0514 23:41:05.504706 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:41:05.505225 kubelet[2362]: E0514 23:41:05.505176 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 14 23:41:05.519348 kubelet[2362]: I0514 23:41:05.519286 2362 topology_manager.go:215] "Topology Admit Handler" podUID="d72f20d61b138d9f8d57ab37d030c4fb" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 23:41:05.520568 kubelet[2362]: I0514 23:41:05.520546 2362 
topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 23:41:05.521255 kubelet[2362]: I0514 23:41:05.521231 2362 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 23:41:05.530315 systemd[1]: Created slice kubepods-burstable-podd72f20d61b138d9f8d57ab37d030c4fb.slice - libcontainer container kubepods-burstable-podd72f20d61b138d9f8d57ab37d030c4fb.slice. May 14 23:41:05.558668 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 14 23:41:05.577519 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 14 23:41:05.603841 kubelet[2362]: I0514 23:41:05.603781 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d72f20d61b138d9f8d57ab37d030c4fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d72f20d61b138d9f8d57ab37d030c4fb\") " pod="kube-system/kube-apiserver-localhost" May 14 23:41:05.603841 kubelet[2362]: I0514 23:41:05.603842 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d72f20d61b138d9f8d57ab37d030c4fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d72f20d61b138d9f8d57ab37d030c4fb\") " pod="kube-system/kube-apiserver-localhost" May 14 23:41:05.604035 kubelet[2362]: I0514 23:41:05.603861 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:05.604035 kubelet[2362]: I0514 23:41:05.603879 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:05.604035 kubelet[2362]: I0514 23:41:05.603897 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:05.604035 kubelet[2362]: I0514 23:41:05.603914 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 23:41:05.604035 kubelet[2362]: I0514 23:41:05.603926 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d72f20d61b138d9f8d57ab37d030c4fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d72f20d61b138d9f8d57ab37d030c4fb\") " pod="kube-system/kube-apiserver-localhost" May 14 23:41:05.604179 kubelet[2362]: I0514 23:41:05.603940 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:05.604179 kubelet[2362]: I0514 23:41:05.603953 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:05.670595 kubelet[2362]: W0514 23:41:05.670353 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:05.670595 kubelet[2362]: E0514 23:41:05.670441 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:05.857476 kubelet[2362]: E0514 23:41:05.857394 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:05.858209 containerd[1499]: time="2025-05-14T23:41:05.858132688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d72f20d61b138d9f8d57ab37d030c4fb,Namespace:kube-system,Attempt:0,}" May 14 23:41:05.875388 kubelet[2362]: E0514 23:41:05.875298 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:05.875766 
containerd[1499]: time="2025-05-14T23:41:05.875713191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 23:41:05.880387 kubelet[2362]: E0514 23:41:05.880356 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:05.880894 containerd[1499]: time="2025-05-14T23:41:05.880847487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 23:41:05.985635 kubelet[2362]: W0514 23:41:05.985397 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:05.985635 kubelet[2362]: E0514 23:41:05.985530 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:06.172147 kubelet[2362]: W0514 23:41:06.172059 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:06.172147 kubelet[2362]: E0514 23:41:06.172144 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:06.191798 kubelet[2362]: W0514 23:41:06.191729 
2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:06.191798 kubelet[2362]: E0514 23:41:06.191780 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:06.203508 kubelet[2362]: E0514 23:41:06.203415 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" May 14 23:41:06.306526 kubelet[2362]: I0514 23:41:06.306458 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:41:06.306973 kubelet[2362]: E0514 23:41:06.306905 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 14 23:41:06.780904 kubelet[2362]: E0514 23:41:06.780873 2362 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused May 14 23:41:06.949570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262117189.mount: Deactivated successfully. 
May 14 23:41:06.956735 containerd[1499]: time="2025-05-14T23:41:06.956677931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:41:06.960409 containerd[1499]: time="2025-05-14T23:41:06.960333263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 23:41:06.961569 containerd[1499]: time="2025-05-14T23:41:06.961520144Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:41:06.962590 containerd[1499]: time="2025-05-14T23:41:06.962532484Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:41:06.963410 containerd[1499]: time="2025-05-14T23:41:06.963337975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 23:41:06.964472 containerd[1499]: time="2025-05-14T23:41:06.964429520Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:41:06.965143 containerd[1499]: time="2025-05-14T23:41:06.965075462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 23:41:06.966858 containerd[1499]: time="2025-05-14T23:41:06.966809061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 
23:41:06.968715 containerd[1499]: time="2025-05-14T23:41:06.968667302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 520.542547ms" May 14 23:41:06.969304 containerd[1499]: time="2025-05-14T23:41:06.969261088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 522.504496ms" May 14 23:41:06.972284 containerd[1499]: time="2025-05-14T23:41:06.972256118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.639629ms" May 14 23:41:07.005566 containerd[1499]: time="2025-05-14T23:41:07.005504681Z" level=info msg="connecting to shim 5439b4db92079784a8b06c8c3b2e8b6c67bbc7abb04622a311dbe0ef8a2cc308" address="unix:///run/containerd/s/1ccfb08cf3f142dc91b8e73be10b3b5fa39d3703a73a32d104213d3914d0fe5f" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:07.011387 containerd[1499]: time="2025-05-14T23:41:07.011330400Z" level=info msg="connecting to shim ba6239081fb431115479349e731d29b1f97aafc2e0650a53cda7c494d03d06d6" address="unix:///run/containerd/s/082422fc166bbfae7401cd55162e664a5d0e3bc66c44f05fad9e05e2fd81c6fa" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:07.028256 containerd[1499]: time="2025-05-14T23:41:07.027658896Z" level=info msg="connecting to shim 
c9c069ee038abea448375aa4d6bada6d3d5a2d1bb6c9a9ccc766fe68708ac607" address="unix:///run/containerd/s/ae7825e5bf477715bb22f4a23382c7a145b985b1878f1a118d7a6c737b903838" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:07.049087 systemd[1]: Started cri-containerd-5439b4db92079784a8b06c8c3b2e8b6c67bbc7abb04622a311dbe0ef8a2cc308.scope - libcontainer container 5439b4db92079784a8b06c8c3b2e8b6c67bbc7abb04622a311dbe0ef8a2cc308. May 14 23:41:07.056848 systemd[1]: Started cri-containerd-ba6239081fb431115479349e731d29b1f97aafc2e0650a53cda7c494d03d06d6.scope - libcontainer container ba6239081fb431115479349e731d29b1f97aafc2e0650a53cda7c494d03d06d6. May 14 23:41:07.071655 systemd[1]: Started cri-containerd-c9c069ee038abea448375aa4d6bada6d3d5a2d1bb6c9a9ccc766fe68708ac607.scope - libcontainer container c9c069ee038abea448375aa4d6bada6d3d5a2d1bb6c9a9ccc766fe68708ac607. May 14 23:41:07.159260 containerd[1499]: time="2025-05-14T23:41:07.159214348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"5439b4db92079784a8b06c8c3b2e8b6c67bbc7abb04622a311dbe0ef8a2cc308\"" May 14 23:41:07.160452 kubelet[2362]: E0514 23:41:07.160430 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:07.162871 containerd[1499]: time="2025-05-14T23:41:07.162829418Z" level=info msg="CreateContainer within sandbox \"5439b4db92079784a8b06c8c3b2e8b6c67bbc7abb04622a311dbe0ef8a2cc308\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:41:07.164041 containerd[1499]: time="2025-05-14T23:41:07.163986059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"ba6239081fb431115479349e731d29b1f97aafc2e0650a53cda7c494d03d06d6\"" May 14 23:41:07.164594 kubelet[2362]: E0514 23:41:07.164555 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:07.166437 containerd[1499]: time="2025-05-14T23:41:07.166407272Z" level=info msg="CreateContainer within sandbox \"ba6239081fb431115479349e731d29b1f97aafc2e0650a53cda7c494d03d06d6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:41:07.175995 containerd[1499]: time="2025-05-14T23:41:07.175505057Z" level=info msg="Container 6604ee96b8a0ef6de5143994eaa810343e709503159bdf05cc026ac8eb64c8ce: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:07.176191 containerd[1499]: time="2025-05-14T23:41:07.176163733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d72f20d61b138d9f8d57ab37d030c4fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9c069ee038abea448375aa4d6bada6d3d5a2d1bb6c9a9ccc766fe68708ac607\"" May 14 23:41:07.176919 kubelet[2362]: E0514 23:41:07.176889 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:07.178983 containerd[1499]: time="2025-05-14T23:41:07.178952296Z" level=info msg="CreateContainer within sandbox \"c9c069ee038abea448375aa4d6bada6d3d5a2d1bb6c9a9ccc766fe68708ac607\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:41:07.179395 containerd[1499]: time="2025-05-14T23:41:07.179351683Z" level=info msg="Container 49675b2bcfbadc72868632486b808a393c9a134820c98cd1ddc66daf27617989: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:07.189803 containerd[1499]: time="2025-05-14T23:41:07.189770535Z" level=info msg="CreateContainer within sandbox 
\"5439b4db92079784a8b06c8c3b2e8b6c67bbc7abb04622a311dbe0ef8a2cc308\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6604ee96b8a0ef6de5143994eaa810343e709503159bdf05cc026ac8eb64c8ce\"" May 14 23:41:07.190551 containerd[1499]: time="2025-05-14T23:41:07.190328831Z" level=info msg="StartContainer for \"6604ee96b8a0ef6de5143994eaa810343e709503159bdf05cc026ac8eb64c8ce\"" May 14 23:41:07.191469 containerd[1499]: time="2025-05-14T23:41:07.191436879Z" level=info msg="connecting to shim 6604ee96b8a0ef6de5143994eaa810343e709503159bdf05cc026ac8eb64c8ce" address="unix:///run/containerd/s/1ccfb08cf3f142dc91b8e73be10b3b5fa39d3703a73a32d104213d3914d0fe5f" protocol=ttrpc version=3 May 14 23:41:07.194626 containerd[1499]: time="2025-05-14T23:41:07.194528706Z" level=info msg="CreateContainer within sandbox \"ba6239081fb431115479349e731d29b1f97aafc2e0650a53cda7c494d03d06d6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49675b2bcfbadc72868632486b808a393c9a134820c98cd1ddc66daf27617989\"" May 14 23:41:07.195054 containerd[1499]: time="2025-05-14T23:41:07.195008784Z" level=info msg="StartContainer for \"49675b2bcfbadc72868632486b808a393c9a134820c98cd1ddc66daf27617989\"" May 14 23:41:07.196415 containerd[1499]: time="2025-05-14T23:41:07.196360618Z" level=info msg="connecting to shim 49675b2bcfbadc72868632486b808a393c9a134820c98cd1ddc66daf27617989" address="unix:///run/containerd/s/082422fc166bbfae7401cd55162e664a5d0e3bc66c44f05fad9e05e2fd81c6fa" protocol=ttrpc version=3 May 14 23:41:07.196902 containerd[1499]: time="2025-05-14T23:41:07.196857592Z" level=info msg="Container 6fc919ac8f8693fe411887ca306b739ce19558d7127bdeb6d1df6a91ed6da1bc: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:07.203997 containerd[1499]: time="2025-05-14T23:41:07.203906450Z" level=info msg="CreateContainer within sandbox \"c9c069ee038abea448375aa4d6bada6d3d5a2d1bb6c9a9ccc766fe68708ac607\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6fc919ac8f8693fe411887ca306b739ce19558d7127bdeb6d1df6a91ed6da1bc\"" May 14 23:41:07.204378 containerd[1499]: time="2025-05-14T23:41:07.204352117Z" level=info msg="StartContainer for \"6fc919ac8f8693fe411887ca306b739ce19558d7127bdeb6d1df6a91ed6da1bc\"" May 14 23:41:07.206176 containerd[1499]: time="2025-05-14T23:41:07.206135075Z" level=info msg="connecting to shim 6fc919ac8f8693fe411887ca306b739ce19558d7127bdeb6d1df6a91ed6da1bc" address="unix:///run/containerd/s/ae7825e5bf477715bb22f4a23382c7a145b985b1878f1a118d7a6c737b903838" protocol=ttrpc version=3 May 14 23:41:07.212714 systemd[1]: Started cri-containerd-6604ee96b8a0ef6de5143994eaa810343e709503159bdf05cc026ac8eb64c8ce.scope - libcontainer container 6604ee96b8a0ef6de5143994eaa810343e709503159bdf05cc026ac8eb64c8ce. May 14 23:41:07.216384 systemd[1]: Started cri-containerd-49675b2bcfbadc72868632486b808a393c9a134820c98cd1ddc66daf27617989.scope - libcontainer container 49675b2bcfbadc72868632486b808a393c9a134820c98cd1ddc66daf27617989. May 14 23:41:07.228611 systemd[1]: Started cri-containerd-6fc919ac8f8693fe411887ca306b739ce19558d7127bdeb6d1df6a91ed6da1bc.scope - libcontainer container 6fc919ac8f8693fe411887ca306b739ce19558d7127bdeb6d1df6a91ed6da1bc. 
May 14 23:41:07.283261 containerd[1499]: time="2025-05-14T23:41:07.283165061Z" level=info msg="StartContainer for \"6604ee96b8a0ef6de5143994eaa810343e709503159bdf05cc026ac8eb64c8ce\" returns successfully" May 14 23:41:07.295500 containerd[1499]: time="2025-05-14T23:41:07.294906191Z" level=info msg="StartContainer for \"49675b2bcfbadc72868632486b808a393c9a134820c98cd1ddc66daf27617989\" returns successfully" May 14 23:41:07.303580 containerd[1499]: time="2025-05-14T23:41:07.302745022Z" level=info msg="StartContainer for \"6fc919ac8f8693fe411887ca306b739ce19558d7127bdeb6d1df6a91ed6da1bc\" returns successfully" May 14 23:41:07.833263 kubelet[2362]: E0514 23:41:07.833212 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:07.847824 kubelet[2362]: E0514 23:41:07.847791 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:07.854969 kubelet[2362]: E0514 23:41:07.854939 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:07.909386 kubelet[2362]: I0514 23:41:07.909344 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:41:08.520087 kubelet[2362]: E0514 23:41:08.520017 2362 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 23:41:08.665622 kubelet[2362]: I0514 23:41:08.665572 2362 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 23:41:08.791314 kubelet[2362]: I0514 23:41:08.790601 2362 apiserver.go:52] "Watching apiserver" May 14 23:41:08.799242 kubelet[2362]: I0514 23:41:08.799054 2362 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:41:08.948734 kubelet[2362]: E0514 23:41:08.948694 2362 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 23:41:08.949249 kubelet[2362]: E0514 23:41:08.949207 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:09.864115 kubelet[2362]: E0514 23:41:09.864057 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:10.818332 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-7.scope)... May 14 23:41:10.818350 systemd[1]: Reloading... May 14 23:41:10.859995 kubelet[2362]: E0514 23:41:10.859953 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:10.914599 zram_generator::config[2682]: No configuration found. May 14 23:41:11.026246 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:41:11.148646 systemd[1]: Reloading finished in 329 ms. May 14 23:41:11.174993 kubelet[2362]: I0514 23:41:11.174858 2362 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:41:11.175060 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:11.190041 systemd[1]: kubelet.service: Deactivated successfully. 
May 14 23:41:11.190363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:11.190422 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 115.3M memory peak. May 14 23:41:11.192527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:11.398432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:11.406825 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:41:11.449535 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:41:11.449535 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:41:11.449535 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:41:11.449980 kubelet[2727]: I0514 23:41:11.449570 2727 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:41:11.454165 kubelet[2727]: I0514 23:41:11.454129 2727 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 23:41:11.454165 kubelet[2727]: I0514 23:41:11.454152 2727 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:41:11.454349 kubelet[2727]: I0514 23:41:11.454324 2727 server.go:927] "Client rotation is on, will bootstrap in background" May 14 23:41:11.455576 kubelet[2727]: I0514 23:41:11.455549 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:41:11.456553 kubelet[2727]: I0514 23:41:11.456522 2727 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:41:11.467165 kubelet[2727]: I0514 23:41:11.467120 2727 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:41:11.467392 kubelet[2727]: I0514 23:41:11.467357 2727 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:41:11.467580 kubelet[2727]: I0514 23:41:11.467389 2727 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 23:41:11.467667 kubelet[2727]: I0514 23:41:11.467597 2727 topology_manager.go:138] "Creating topology manager with none policy" May 14 
23:41:11.467667 kubelet[2727]: I0514 23:41:11.467609 2727 container_manager_linux.go:301] "Creating device plugin manager" May 14 23:41:11.467739 kubelet[2727]: I0514 23:41:11.467668 2727 state_mem.go:36] "Initialized new in-memory state store" May 14 23:41:11.467805 kubelet[2727]: I0514 23:41:11.467793 2727 kubelet.go:400] "Attempting to sync node with API server" May 14 23:41:11.467842 kubelet[2727]: I0514 23:41:11.467806 2727 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:41:11.467842 kubelet[2727]: I0514 23:41:11.467827 2727 kubelet.go:312] "Adding apiserver pod source" May 14 23:41:11.467910 kubelet[2727]: I0514 23:41:11.467845 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:41:11.469234 kubelet[2727]: I0514 23:41:11.468445 2727 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 23:41:11.469234 kubelet[2727]: I0514 23:41:11.468648 2727 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:41:11.469234 kubelet[2727]: I0514 23:41:11.469023 2727 server.go:1264] "Started kubelet" May 14 23:41:11.470616 kubelet[2727]: I0514 23:41:11.470530 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:41:11.470866 kubelet[2727]: I0514 23:41:11.470840 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:41:11.471001 kubelet[2727]: I0514 23:41:11.470980 2727 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:41:11.471067 kubelet[2727]: I0514 23:41:11.471046 2727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:41:11.471999 kubelet[2727]: I0514 23:41:11.471970 2727 server.go:455] "Adding debug handlers to kubelet server" May 14 23:41:11.476035 kubelet[2727]: I0514 23:41:11.476009 2727 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 23:41:11.476242 kubelet[2727]: I0514 23:41:11.476220 2727 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:41:11.476457 kubelet[2727]: I0514 23:41:11.476442 2727 reconciler.go:26] "Reconciler: start to sync state" May 14 23:41:11.480308 kubelet[2727]: E0514 23:41:11.480271 2727 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:41:11.481993 kubelet[2727]: I0514 23:41:11.481971 2727 factory.go:221] Registration of the containerd container factory successfully May 14 23:41:11.481993 kubelet[2727]: I0514 23:41:11.481988 2727 factory.go:221] Registration of the systemd container factory successfully May 14 23:41:11.482117 kubelet[2727]: I0514 23:41:11.482076 2727 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:41:11.488585 kubelet[2727]: I0514 23:41:11.488543 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:41:11.490471 kubelet[2727]: I0514 23:41:11.490393 2727 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 23:41:11.490682 kubelet[2727]: I0514 23:41:11.490667 2727 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:41:11.490767 kubelet[2727]: I0514 23:41:11.490755 2727 kubelet.go:2337] "Starting kubelet main sync loop" May 14 23:41:11.490904 kubelet[2727]: E0514 23:41:11.490881 2727 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:41:11.515923 kubelet[2727]: I0514 23:41:11.515883 2727 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:41:11.515923 kubelet[2727]: I0514 23:41:11.515908 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:41:11.515923 kubelet[2727]: I0514 23:41:11.515933 2727 state_mem.go:36] "Initialized new in-memory state store" May 14 23:41:11.516130 kubelet[2727]: I0514 23:41:11.516105 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:41:11.516174 kubelet[2727]: I0514 23:41:11.516119 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:41:11.516174 kubelet[2727]: I0514 23:41:11.516141 2727 policy_none.go:49] "None policy: Start" May 14 23:41:11.516724 kubelet[2727]: I0514 23:41:11.516703 2727 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:41:11.516789 kubelet[2727]: I0514 23:41:11.516730 2727 state_mem.go:35] "Initializing new in-memory state store" May 14 23:41:11.516883 kubelet[2727]: I0514 23:41:11.516868 2727 state_mem.go:75] "Updated machine memory state" May 14 23:41:11.520964 kubelet[2727]: I0514 23:41:11.520935 2727 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:41:11.521206 kubelet[2727]: I0514 23:41:11.521163 2727 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:41:11.521319 kubelet[2727]: I0514 23:41:11.521299 2727 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:41:11.580965 kubelet[2727]: I0514 23:41:11.580926 2727 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:41:11.587265 kubelet[2727]: I0514 23:41:11.587221 2727 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 14 23:41:11.587396 kubelet[2727]: I0514 23:41:11.587330 2727 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 23:41:11.591586 kubelet[2727]: I0514 23:41:11.591542 2727 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 23:41:11.591683 kubelet[2727]: I0514 23:41:11.591644 2727 topology_manager.go:215] "Topology Admit Handler" podUID="d72f20d61b138d9f8d57ab37d030c4fb" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 23:41:11.592464 kubelet[2727]: I0514 23:41:11.592028 2727 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 23:41:11.605527 kubelet[2727]: E0514 23:41:11.603217 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 23:41:11.777502 kubelet[2727]: I0514 23:41:11.777436 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:11.777727 kubelet[2727]: I0514 23:41:11.777507 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 23:41:11.777727 kubelet[2727]: I0514 23:41:11.777534 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d72f20d61b138d9f8d57ab37d030c4fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d72f20d61b138d9f8d57ab37d030c4fb\") " pod="kube-system/kube-apiserver-localhost" May 14 23:41:11.777727 kubelet[2727]: I0514 23:41:11.777555 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d72f20d61b138d9f8d57ab37d030c4fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d72f20d61b138d9f8d57ab37d030c4fb\") " pod="kube-system/kube-apiserver-localhost" May 14 23:41:11.777727 kubelet[2727]: I0514 23:41:11.777578 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:11.777727 kubelet[2727]: I0514 23:41:11.777600 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:11.777943 kubelet[2727]: I0514 23:41:11.777620 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/d72f20d61b138d9f8d57ab37d030c4fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d72f20d61b138d9f8d57ab37d030c4fb\") " pod="kube-system/kube-apiserver-localhost" May 14 23:41:11.777943 kubelet[2727]: I0514 23:41:11.777640 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:11.777943 kubelet[2727]: I0514 23:41:11.777661 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:41:11.904726 kubelet[2727]: E0514 23:41:11.904559 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:11.904726 kubelet[2727]: E0514 23:41:11.904654 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:11.905164 kubelet[2727]: E0514 23:41:11.905119 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:12.468974 kubelet[2727]: I0514 23:41:12.468928 2727 apiserver.go:52] "Watching apiserver" May 14 23:41:12.477174 kubelet[2727]: I0514 23:41:12.477139 2727 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 
23:41:12.499916 kubelet[2727]: E0514 23:41:12.499712 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:12.499916 kubelet[2727]: E0514 23:41:12.499825 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:12.500002 kubelet[2727]: E0514 23:41:12.499933 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:12.576390 kubelet[2727]: I0514 23:41:12.576312 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.576279864 podStartE2EDuration="1.576279864s" podCreationTimestamp="2025-05-14 23:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:41:12.566063598 +0000 UTC m=+1.155165999" watchObservedRunningTime="2025-05-14 23:41:12.576279864 +0000 UTC m=+1.165382265" May 14 23:41:12.576624 kubelet[2727]: I0514 23:41:12.576430 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5764240470000002 podStartE2EDuration="1.576424047s" podCreationTimestamp="2025-05-14 23:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:41:12.575607623 +0000 UTC m=+1.164710024" watchObservedRunningTime="2025-05-14 23:41:12.576424047 +0000 UTC m=+1.165526448" May 14 23:41:12.592257 kubelet[2727]: I0514 23:41:12.592160 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.5921339679999997 podStartE2EDuration="3.592133968s" podCreationTimestamp="2025-05-14 23:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:41:12.584513987 +0000 UTC m=+1.173616398" watchObservedRunningTime="2025-05-14 23:41:12.592133968 +0000 UTC m=+1.181236369" May 14 23:41:13.500966 kubelet[2727]: E0514 23:41:13.500930 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:15.856791 sudo[1700]: pam_unix(sudo:session): session closed for user root May 14 23:41:15.858362 sshd[1699]: Connection closed by 10.0.0.1 port 57688 May 14 23:41:15.858860 sshd-session[1696]: pam_unix(sshd:session): session closed for user core May 14 23:41:15.863089 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:57688.service: Deactivated successfully. May 14 23:41:15.865631 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:41:15.865857 systemd[1]: session-7.scope: Consumed 5.610s CPU time, 238.3M memory peak. May 14 23:41:15.867084 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. May 14 23:41:15.867892 systemd-logind[1486]: Removed session 7. 
May 14 23:41:16.328831 kubelet[2727]: E0514 23:41:16.328786 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:20.678111 kubelet[2727]: E0514 23:41:20.678027 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:21.514476 kubelet[2727]: E0514 23:41:21.514425 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:21.840057 kubelet[2727]: E0514 23:41:21.839997 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:22.516010 kubelet[2727]: E0514 23:41:22.515957 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:24.931249 kubelet[2727]: I0514 23:41:24.931166 2727 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:41:24.931933 containerd[1499]: time="2025-05-14T23:41:24.931661056Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:41:24.932285 kubelet[2727]: I0514 23:41:24.932003 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:41:25.765559 update_engine[1490]: I20250514 23:41:25.765445 1490 update_attempter.cc:509] Updating boot flags... 
May 14 23:41:25.801563 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2823) May 14 23:41:25.848589 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2823) May 14 23:41:25.864205 kubelet[2727]: I0514 23:41:25.858400 2727 topology_manager.go:215] "Topology Admit Handler" podUID="e4ed74af-176e-411b-8b73-cf1552249221" podNamespace="kube-system" podName="kube-proxy-nv7pm" May 14 23:41:25.866614 systemd[1]: Created slice kubepods-besteffort-pode4ed74af_176e_411b_8b73_cf1552249221.slice - libcontainer container kubepods-besteffort-pode4ed74af_176e_411b_8b73_cf1552249221.slice. May 14 23:41:25.904508 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2823) May 14 23:41:25.959220 kubelet[2727]: I0514 23:41:25.958764 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4ed74af-176e-411b-8b73-cf1552249221-kube-proxy\") pod \"kube-proxy-nv7pm\" (UID: \"e4ed74af-176e-411b-8b73-cf1552249221\") " pod="kube-system/kube-proxy-nv7pm" May 14 23:41:25.959220 kubelet[2727]: I0514 23:41:25.958804 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4ed74af-176e-411b-8b73-cf1552249221-xtables-lock\") pod \"kube-proxy-nv7pm\" (UID: \"e4ed74af-176e-411b-8b73-cf1552249221\") " pod="kube-system/kube-proxy-nv7pm" May 14 23:41:25.959220 kubelet[2727]: I0514 23:41:25.958821 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4ed74af-176e-411b-8b73-cf1552249221-lib-modules\") pod \"kube-proxy-nv7pm\" (UID: \"e4ed74af-176e-411b-8b73-cf1552249221\") " pod="kube-system/kube-proxy-nv7pm" May 14 23:41:25.959220 kubelet[2727]: I0514 23:41:25.958845 2727 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skh2h\" (UniqueName: \"kubernetes.io/projected/e4ed74af-176e-411b-8b73-cf1552249221-kube-api-access-skh2h\") pod \"kube-proxy-nv7pm\" (UID: \"e4ed74af-176e-411b-8b73-cf1552249221\") " pod="kube-system/kube-proxy-nv7pm" May 14 23:41:25.974599 kubelet[2727]: I0514 23:41:25.974357 2727 topology_manager.go:215] "Topology Admit Handler" podUID="4ba5f0f2-fb90-469a-bae5-fd0eaa7d84dc" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-lbmdx" May 14 23:41:25.990450 systemd[1]: Created slice kubepods-besteffort-pod4ba5f0f2_fb90_469a_bae5_fd0eaa7d84dc.slice - libcontainer container kubepods-besteffort-pod4ba5f0f2_fb90_469a_bae5_fd0eaa7d84dc.slice. May 14 23:41:26.160880 kubelet[2727]: I0514 23:41:26.160783 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ba5f0f2-fb90-469a-bae5-fd0eaa7d84dc-var-lib-calico\") pod \"tigera-operator-797db67f8-lbmdx\" (UID: \"4ba5f0f2-fb90-469a-bae5-fd0eaa7d84dc\") " pod="tigera-operator/tigera-operator-797db67f8-lbmdx" May 14 23:41:26.160880 kubelet[2727]: I0514 23:41:26.160854 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzn9\" (UniqueName: \"kubernetes.io/projected/4ba5f0f2-fb90-469a-bae5-fd0eaa7d84dc-kube-api-access-8nzn9\") pod \"tigera-operator-797db67f8-lbmdx\" (UID: \"4ba5f0f2-fb90-469a-bae5-fd0eaa7d84dc\") " pod="tigera-operator/tigera-operator-797db67f8-lbmdx" May 14 23:41:26.184134 kubelet[2727]: E0514 23:41:26.184057 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:26.184916 containerd[1499]: time="2025-05-14T23:41:26.184864472Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-nv7pm,Uid:e4ed74af-176e-411b-8b73-cf1552249221,Namespace:kube-system,Attempt:0,}" May 14 23:41:26.333262 kubelet[2727]: E0514 23:41:26.333200 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:26.376337 containerd[1499]: time="2025-05-14T23:41:26.376268653Z" level=info msg="connecting to shim ee1767c746b8b3faec6f2ea80e1533b2b2601cd5c73218aa77da4584c762361d" address="unix:///run/containerd/s/0d9621538648c268e556d5e31cc0cee258b803efb42cdf2320f4ac3e84892d6e" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:26.421675 systemd[1]: Started cri-containerd-ee1767c746b8b3faec6f2ea80e1533b2b2601cd5c73218aa77da4584c762361d.scope - libcontainer container ee1767c746b8b3faec6f2ea80e1533b2b2601cd5c73218aa77da4584c762361d. May 14 23:41:26.453629 containerd[1499]: time="2025-05-14T23:41:26.453558402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nv7pm,Uid:e4ed74af-176e-411b-8b73-cf1552249221,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee1767c746b8b3faec6f2ea80e1533b2b2601cd5c73218aa77da4584c762361d\"" May 14 23:41:26.454349 kubelet[2727]: E0514 23:41:26.454324 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:26.456200 containerd[1499]: time="2025-05-14T23:41:26.456159247Z" level=info msg="CreateContainer within sandbox \"ee1767c746b8b3faec6f2ea80e1533b2b2601cd5c73218aa77da4584c762361d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:41:26.469020 containerd[1499]: time="2025-05-14T23:41:26.468962538Z" level=info msg="Container ad79f9c72f5a5a57c3a02978322c03199181496ec117ee858ae674a13e656d30: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:26.479756 containerd[1499]: 
time="2025-05-14T23:41:26.479710988Z" level=info msg="CreateContainer within sandbox \"ee1767c746b8b3faec6f2ea80e1533b2b2601cd5c73218aa77da4584c762361d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad79f9c72f5a5a57c3a02978322c03199181496ec117ee858ae674a13e656d30\"" May 14 23:41:26.481302 containerd[1499]: time="2025-05-14T23:41:26.480263634Z" level=info msg="StartContainer for \"ad79f9c72f5a5a57c3a02978322c03199181496ec117ee858ae674a13e656d30\"" May 14 23:41:26.481711 containerd[1499]: time="2025-05-14T23:41:26.481685529Z" level=info msg="connecting to shim ad79f9c72f5a5a57c3a02978322c03199181496ec117ee858ae674a13e656d30" address="unix:///run/containerd/s/0d9621538648c268e556d5e31cc0cee258b803efb42cdf2320f4ac3e84892d6e" protocol=ttrpc version=3 May 14 23:41:26.504620 systemd[1]: Started cri-containerd-ad79f9c72f5a5a57c3a02978322c03199181496ec117ee858ae674a13e656d30.scope - libcontainer container ad79f9c72f5a5a57c3a02978322c03199181496ec117ee858ae674a13e656d30. 
May 14 23:41:26.526679 kubelet[2727]: E0514 23:41:26.526634 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:26.551882 containerd[1499]: time="2025-05-14T23:41:26.551839331Z" level=info msg="StartContainer for \"ad79f9c72f5a5a57c3a02978322c03199181496ec117ee858ae674a13e656d30\" returns successfully" May 14 23:41:26.593907 containerd[1499]: time="2025-05-14T23:41:26.593852991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-lbmdx,Uid:4ba5f0f2-fb90-469a-bae5-fd0eaa7d84dc,Namespace:tigera-operator,Attempt:0,}" May 14 23:41:26.616753 containerd[1499]: time="2025-05-14T23:41:26.616676306Z" level=info msg="connecting to shim 7b357dd60373dac1f6dc9c790c55efa5448a6d0eb208eb52fec0ac8525725203" address="unix:///run/containerd/s/916321bdcb26ca99fe2b8230c8e617bfa0506ca9db49b37f75b982e7d61ca398" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:26.650639 systemd[1]: Started cri-containerd-7b357dd60373dac1f6dc9c790c55efa5448a6d0eb208eb52fec0ac8525725203.scope - libcontainer container 7b357dd60373dac1f6dc9c790c55efa5448a6d0eb208eb52fec0ac8525725203. 
May 14 23:41:26.699208 containerd[1499]: time="2025-05-14T23:41:26.699049274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-lbmdx,Uid:4ba5f0f2-fb90-469a-bae5-fd0eaa7d84dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b357dd60373dac1f6dc9c790c55efa5448a6d0eb208eb52fec0ac8525725203\"" May 14 23:41:26.701144 containerd[1499]: time="2025-05-14T23:41:26.701101571Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 23:41:27.531315 kubelet[2727]: E0514 23:41:27.531263 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:27.541818 kubelet[2727]: I0514 23:41:27.541743 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nv7pm" podStartSLOduration=2.5417219639999997 podStartE2EDuration="2.541721964s" podCreationTimestamp="2025-05-14 23:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:41:27.541208782 +0000 UTC m=+16.130311183" watchObservedRunningTime="2025-05-14 23:41:27.541721964 +0000 UTC m=+16.130824365" May 14 23:41:28.265813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22812539.mount: Deactivated successfully. 
May 14 23:41:28.532354 kubelet[2727]: E0514 23:41:28.532313 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:28.979247 containerd[1499]: time="2025-05-14T23:41:28.979070476Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:28.979820 containerd[1499]: time="2025-05-14T23:41:28.979751052Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 23:41:28.980883 containerd[1499]: time="2025-05-14T23:41:28.980818733Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:28.982740 containerd[1499]: time="2025-05-14T23:41:28.982703527Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:28.983244 containerd[1499]: time="2025-05-14T23:41:28.983211379Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.2820577s" May 14 23:41:28.983282 containerd[1499]: time="2025-05-14T23:41:28.983244030Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 23:41:28.985143 containerd[1499]: time="2025-05-14T23:41:28.985062650Z" level=info msg="CreateContainer within sandbox 
\"7b357dd60373dac1f6dc9c790c55efa5448a6d0eb208eb52fec0ac8525725203\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 23:41:28.995151 containerd[1499]: time="2025-05-14T23:41:28.994622933Z" level=info msg="Container 8fb01a34b939def17d4ff66570a6246fd690ab2f1c042099896c1129960986a0: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:29.002493 containerd[1499]: time="2025-05-14T23:41:29.002434349Z" level=info msg="CreateContainer within sandbox \"7b357dd60373dac1f6dc9c790c55efa5448a6d0eb208eb52fec0ac8525725203\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8fb01a34b939def17d4ff66570a6246fd690ab2f1c042099896c1129960986a0\"" May 14 23:41:29.003087 containerd[1499]: time="2025-05-14T23:41:29.003039924Z" level=info msg="StartContainer for \"8fb01a34b939def17d4ff66570a6246fd690ab2f1c042099896c1129960986a0\"" May 14 23:41:29.004018 containerd[1499]: time="2025-05-14T23:41:29.003979625Z" level=info msg="connecting to shim 8fb01a34b939def17d4ff66570a6246fd690ab2f1c042099896c1129960986a0" address="unix:///run/containerd/s/916321bdcb26ca99fe2b8230c8e617bfa0506ca9db49b37f75b982e7d61ca398" protocol=ttrpc version=3 May 14 23:41:29.030775 systemd[1]: Started cri-containerd-8fb01a34b939def17d4ff66570a6246fd690ab2f1c042099896c1129960986a0.scope - libcontainer container 8fb01a34b939def17d4ff66570a6246fd690ab2f1c042099896c1129960986a0. 
May 14 23:41:29.064786 containerd[1499]: time="2025-05-14T23:41:29.064732148Z" level=info msg="StartContainer for \"8fb01a34b939def17d4ff66570a6246fd690ab2f1c042099896c1129960986a0\" returns successfully" May 14 23:41:29.543694 kubelet[2727]: I0514 23:41:29.543602 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-lbmdx" podStartSLOduration=2.260215645 podStartE2EDuration="4.543572421s" podCreationTimestamp="2025-05-14 23:41:25 +0000 UTC" firstStartedPulling="2025-05-14 23:41:26.700600602 +0000 UTC m=+15.289703003" lastFinishedPulling="2025-05-14 23:41:28.983957378 +0000 UTC m=+17.573059779" observedRunningTime="2025-05-14 23:41:29.543456794 +0000 UTC m=+18.132559205" watchObservedRunningTime="2025-05-14 23:41:29.543572421 +0000 UTC m=+18.132674822" May 14 23:41:31.963732 kubelet[2727]: I0514 23:41:31.963580 2727 topology_manager.go:215] "Topology Admit Handler" podUID="dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc" podNamespace="calico-system" podName="calico-typha-54cd8c47bb-47h5q" May 14 23:41:31.985530 systemd[1]: Created slice kubepods-besteffort-poddcf38b1e_0b25_4d8c_abcc_f5f9fccebacc.slice - libcontainer container kubepods-besteffort-poddcf38b1e_0b25_4d8c_abcc_f5f9fccebacc.slice. May 14 23:41:32.015101 kubelet[2727]: I0514 23:41:32.015043 2727 topology_manager.go:215] "Topology Admit Handler" podUID="93b0c8f0-1931-40c5-866a-07ca434e5d33" podNamespace="calico-system" podName="calico-node-jtcp9" May 14 23:41:32.026386 systemd[1]: Created slice kubepods-besteffort-pod93b0c8f0_1931_40c5_866a_07ca434e5d33.slice - libcontainer container kubepods-besteffort-pod93b0c8f0_1931_40c5_866a_07ca434e5d33.slice. 
May 14 23:41:32.093817 kubelet[2727]: I0514 23:41:32.093767 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc-tigera-ca-bundle\") pod \"calico-typha-54cd8c47bb-47h5q\" (UID: \"dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc\") " pod="calico-system/calico-typha-54cd8c47bb-47h5q" May 14 23:41:32.093817 kubelet[2727]: I0514 23:41:32.093821 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vd7\" (UniqueName: \"kubernetes.io/projected/dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc-kube-api-access-g9vd7\") pod \"calico-typha-54cd8c47bb-47h5q\" (UID: \"dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc\") " pod="calico-system/calico-typha-54cd8c47bb-47h5q" May 14 23:41:32.094041 kubelet[2727]: I0514 23:41:32.093844 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc-typha-certs\") pod \"calico-typha-54cd8c47bb-47h5q\" (UID: \"dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc\") " pod="calico-system/calico-typha-54cd8c47bb-47h5q" May 14 23:41:32.128540 kubelet[2727]: I0514 23:41:32.126440 2727 topology_manager.go:215] "Topology Admit Handler" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" podNamespace="calico-system" podName="csi-node-driver-6wg62" May 14 23:41:32.128540 kubelet[2727]: E0514 23:41:32.127614 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6wg62" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" May 14 23:41:32.194362 kubelet[2727]: I0514 23:41:32.194255 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-var-run-calico\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194362 kubelet[2727]: I0514 23:41:32.194301 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-var-lib-calico\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194362 kubelet[2727]: I0514 23:41:32.194340 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93b0c8f0-1931-40c5-866a-07ca434e5d33-tigera-ca-bundle\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194362 kubelet[2727]: I0514 23:41:32.194355 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-cni-net-dir\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194362 kubelet[2727]: I0514 23:41:32.194380 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-policysync\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194704 kubelet[2727]: I0514 23:41:32.194394 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-cni-bin-dir\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194704 kubelet[2727]: I0514 23:41:32.194647 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-cni-log-dir\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194871 kubelet[2727]: I0514 23:41:32.194740 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xf8z\" (UniqueName: \"kubernetes.io/projected/93b0c8f0-1931-40c5-866a-07ca434e5d33-kube-api-access-7xf8z\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194871 kubelet[2727]: I0514 23:41:32.194793 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-lib-modules\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194871 kubelet[2727]: I0514 23:41:32.194810 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-flexvol-driver-host\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194871 kubelet[2727]: I0514 23:41:32.194825 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/93b0c8f0-1931-40c5-866a-07ca434e5d33-xtables-lock\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.194871 kubelet[2727]: I0514 23:41:32.194841 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93b0c8f0-1931-40c5-866a-07ca434e5d33-node-certs\") pod \"calico-node-jtcp9\" (UID: \"93b0c8f0-1931-40c5-866a-07ca434e5d33\") " pod="calico-system/calico-node-jtcp9" May 14 23:41:32.291366 kubelet[2727]: E0514 23:41:32.290998 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:32.291839 containerd[1499]: time="2025-05-14T23:41:32.291765242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54cd8c47bb-47h5q,Uid:dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc,Namespace:calico-system,Attempt:0,}" May 14 23:41:32.295173 kubelet[2727]: I0514 23:41:32.295125 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmkxx\" (UniqueName: \"kubernetes.io/projected/2231288d-40de-4b1d-a5cc-5c1b3be4909b-kube-api-access-rmkxx\") pod \"csi-node-driver-6wg62\" (UID: \"2231288d-40de-4b1d-a5cc-5c1b3be4909b\") " pod="calico-system/csi-node-driver-6wg62" May 14 23:41:32.295384 kubelet[2727]: I0514 23:41:32.295208 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2231288d-40de-4b1d-a5cc-5c1b3be4909b-socket-dir\") pod \"csi-node-driver-6wg62\" (UID: \"2231288d-40de-4b1d-a5cc-5c1b3be4909b\") " pod="calico-system/csi-node-driver-6wg62" May 14 23:41:32.295384 kubelet[2727]: I0514 23:41:32.295276 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2231288d-40de-4b1d-a5cc-5c1b3be4909b-kubelet-dir\") pod \"csi-node-driver-6wg62\" (UID: \"2231288d-40de-4b1d-a5cc-5c1b3be4909b\") " pod="calico-system/csi-node-driver-6wg62" May 14 23:41:32.295384 kubelet[2727]: I0514 23:41:32.295339 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2231288d-40de-4b1d-a5cc-5c1b3be4909b-registration-dir\") pod \"csi-node-driver-6wg62\" (UID: \"2231288d-40de-4b1d-a5cc-5c1b3be4909b\") " pod="calico-system/csi-node-driver-6wg62" May 14 23:41:32.295384 kubelet[2727]: I0514 23:41:32.295377 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2231288d-40de-4b1d-a5cc-5c1b3be4909b-varrun\") pod \"csi-node-driver-6wg62\" (UID: \"2231288d-40de-4b1d-a5cc-5c1b3be4909b\") " pod="calico-system/csi-node-driver-6wg62" May 14 23:41:32.297309 kubelet[2727]: E0514 23:41:32.297266 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.297309 kubelet[2727]: W0514 23:41:32.297297 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.297417 kubelet[2727]: E0514 23:41:32.297326 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 14 23:41:32.309720 kubelet[2727]: E0514 23:41:32.309546 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.309720 kubelet[2727]: W0514 23:41:32.309556 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.309720 kubelet[2727]: E0514 23:41:32.309566 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.313876 kubelet[2727]: E0514 23:41:32.313828 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.314600 kubelet[2727]: W0514 23:41:32.314565 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.314600 kubelet[2727]: E0514 23:41:32.314597 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.326863 containerd[1499]: time="2025-05-14T23:41:32.326800156Z" level=info msg="connecting to shim 9b5b450dd5f409f906dc6b35f57ec0cacb7074932847d22377fc3d0443f9cafc" address="unix:///run/containerd/s/716e075c8a933ce677ec5fc0c476b8d45fccca1cb9aa0d7d084a0015b101a490" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:32.330272 kubelet[2727]: E0514 23:41:32.330221 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:32.330927 containerd[1499]: time="2025-05-14T23:41:32.330757035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jtcp9,Uid:93b0c8f0-1931-40c5-866a-07ca434e5d33,Namespace:calico-system,Attempt:0,}" May 14 23:41:32.356854 systemd[1]: Started cri-containerd-9b5b450dd5f409f906dc6b35f57ec0cacb7074932847d22377fc3d0443f9cafc.scope - libcontainer container 9b5b450dd5f409f906dc6b35f57ec0cacb7074932847d22377fc3d0443f9cafc. May 14 23:41:32.397040 kubelet[2727]: E0514 23:41:32.396987 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.397040 kubelet[2727]: W0514 23:41:32.397017 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.397040 kubelet[2727]: E0514 23:41:32.397054 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.397371 kubelet[2727]: E0514 23:41:32.397353 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.397371 kubelet[2727]: W0514 23:41:32.397367 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.397430 kubelet[2727]: E0514 23:41:32.397383 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.397659 kubelet[2727]: E0514 23:41:32.397640 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.397685 kubelet[2727]: W0514 23:41:32.397658 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.397685 kubelet[2727]: E0514 23:41:32.397675 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.397999 kubelet[2727]: E0514 23:41:32.397980 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.397999 kubelet[2727]: W0514 23:41:32.397996 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.398082 kubelet[2727]: E0514 23:41:32.398012 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.398375 kubelet[2727]: E0514 23:41:32.398291 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.398375 kubelet[2727]: W0514 23:41:32.398332 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.398375 kubelet[2727]: E0514 23:41:32.398349 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.401642 kubelet[2727]: E0514 23:41:32.401604 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.401975 kubelet[2727]: W0514 23:41:32.401787 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.401975 kubelet[2727]: E0514 23:41:32.401905 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.402281 kubelet[2727]: E0514 23:41:32.402169 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.402281 kubelet[2727]: W0514 23:41:32.402181 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.402281 kubelet[2727]: E0514 23:41:32.402240 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.402624 kubelet[2727]: E0514 23:41:32.402467 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.402624 kubelet[2727]: W0514 23:41:32.402513 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.402624 kubelet[2727]: E0514 23:41:32.402536 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.404152 kubelet[2727]: E0514 23:41:32.402920 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.404152 kubelet[2727]: W0514 23:41:32.402938 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.404152 kubelet[2727]: E0514 23:41:32.402964 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.404152 kubelet[2727]: E0514 23:41:32.403309 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.404152 kubelet[2727]: W0514 23:41:32.403320 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.404152 kubelet[2727]: E0514 23:41:32.403381 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.404152 kubelet[2727]: E0514 23:41:32.403758 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.404152 kubelet[2727]: W0514 23:41:32.403770 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.404152 kubelet[2727]: E0514 23:41:32.403817 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.404385 kubelet[2727]: E0514 23:41:32.404229 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.404385 kubelet[2727]: W0514 23:41:32.404241 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.404385 kubelet[2727]: E0514 23:41:32.404294 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.404712 kubelet[2727]: E0514 23:41:32.404618 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.404712 kubelet[2727]: W0514 23:41:32.404637 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.404712 kubelet[2727]: E0514 23:41:32.404652 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.404931 kubelet[2727]: E0514 23:41:32.404905 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.404931 kubelet[2727]: W0514 23:41:32.404923 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.405041 kubelet[2727]: E0514 23:41:32.404939 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.405291 kubelet[2727]: E0514 23:41:32.405262 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.405291 kubelet[2727]: W0514 23:41:32.405280 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.405463 kubelet[2727]: E0514 23:41:32.405437 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.405635 kubelet[2727]: E0514 23:41:32.405609 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.405635 kubelet[2727]: W0514 23:41:32.405627 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.405754 kubelet[2727]: E0514 23:41:32.405644 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.406006 kubelet[2727]: E0514 23:41:32.405987 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.406054 kubelet[2727]: W0514 23:41:32.406003 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.406080 kubelet[2727]: E0514 23:41:32.406058 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.406435 kubelet[2727]: E0514 23:41:32.406413 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.406435 kubelet[2727]: W0514 23:41:32.406432 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.406519 kubelet[2727]: E0514 23:41:32.406449 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.407742 kubelet[2727]: E0514 23:41:32.406707 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.407742 kubelet[2727]: W0514 23:41:32.406747 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.407742 kubelet[2727]: E0514 23:41:32.406766 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.407742 kubelet[2727]: E0514 23:41:32.407031 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.407742 kubelet[2727]: W0514 23:41:32.407042 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.407742 kubelet[2727]: E0514 23:41:32.407058 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.407742 kubelet[2727]: E0514 23:41:32.407330 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.407742 kubelet[2727]: W0514 23:41:32.407342 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.407742 kubelet[2727]: E0514 23:41:32.407369 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.407742 kubelet[2727]: E0514 23:41:32.407657 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.408009 kubelet[2727]: W0514 23:41:32.407667 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.408009 kubelet[2727]: E0514 23:41:32.407679 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.408175 kubelet[2727]: E0514 23:41:32.408149 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.408175 kubelet[2727]: W0514 23:41:32.408170 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.408257 kubelet[2727]: E0514 23:41:32.408188 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.408516 kubelet[2727]: E0514 23:41:32.408429 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.408516 kubelet[2727]: W0514 23:41:32.408447 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.408585 kubelet[2727]: E0514 23:41:32.408461 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.408840 kubelet[2727]: E0514 23:41:32.408808 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.408840 kubelet[2727]: W0514 23:41:32.408829 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.408840 kubelet[2727]: E0514 23:41:32.408841 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:32.415287 containerd[1499]: time="2025-05-14T23:41:32.415135331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54cd8c47bb-47h5q,Uid:dcf38b1e-0b25-4d8c-abcc-f5f9fccebacc,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b5b450dd5f409f906dc6b35f57ec0cacb7074932847d22377fc3d0443f9cafc\"" May 14 23:41:32.415922 kubelet[2727]: E0514 23:41:32.415895 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:32.417247 containerd[1499]: time="2025-05-14T23:41:32.416699803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 23:41:32.417307 kubelet[2727]: E0514 23:41:32.417257 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:32.417307 kubelet[2727]: W0514 23:41:32.417273 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:32.417307 kubelet[2727]: E0514 23:41:32.417292 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:32.433704 containerd[1499]: time="2025-05-14T23:41:32.433651489Z" level=info msg="connecting to shim e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8" address="unix:///run/containerd/s/51ff5a75e47ed8727f85b89ec2218657c9e5fb27f581a0c2e8cb92e1c7dc654e" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:32.462636 systemd[1]: Started cri-containerd-e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8.scope - libcontainer container e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8. 
May 14 23:41:32.518335 containerd[1499]: time="2025-05-14T23:41:32.518260687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jtcp9,Uid:93b0c8f0-1931-40c5-866a-07ca434e5d33,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8\"" May 14 23:41:32.519127 kubelet[2727]: E0514 23:41:32.519079 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:33.491935 kubelet[2727]: E0514 23:41:33.491856 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6wg62" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" May 14 23:41:35.219396 containerd[1499]: time="2025-05-14T23:41:35.219318185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:35.220064 containerd[1499]: time="2025-05-14T23:41:35.219999062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 23:41:35.221309 containerd[1499]: time="2025-05-14T23:41:35.221269425Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:35.223684 containerd[1499]: time="2025-05-14T23:41:35.223650349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:35.224373 containerd[1499]: time="2025-05-14T23:41:35.224326808Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.807585476s" May 14 23:41:35.224373 containerd[1499]: time="2025-05-14T23:41:35.224367214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 23:41:35.225281 containerd[1499]: time="2025-05-14T23:41:35.225254818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 23:41:35.235326 containerd[1499]: time="2025-05-14T23:41:35.235287892Z" level=info msg="CreateContainer within sandbox \"9b5b450dd5f409f906dc6b35f57ec0cacb7074932847d22377fc3d0443f9cafc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 23:41:35.244238 containerd[1499]: time="2025-05-14T23:41:35.244191369Z" level=info msg="Container 0e0f0b32d5b378889a6c1ba566f4c824e21c95e692cd71054886bd7cde1117d5: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:35.253257 containerd[1499]: time="2025-05-14T23:41:35.253211574Z" level=info msg="CreateContainer within sandbox \"9b5b450dd5f409f906dc6b35f57ec0cacb7074932847d22377fc3d0443f9cafc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0e0f0b32d5b378889a6c1ba566f4c824e21c95e692cd71054886bd7cde1117d5\"" May 14 23:41:35.253756 containerd[1499]: time="2025-05-14T23:41:35.253713906Z" level=info msg="StartContainer for \"0e0f0b32d5b378889a6c1ba566f4c824e21c95e692cd71054886bd7cde1117d5\"" May 14 23:41:35.254758 containerd[1499]: time="2025-05-14T23:41:35.254723880Z" level=info msg="connecting to shim 0e0f0b32d5b378889a6c1ba566f4c824e21c95e692cd71054886bd7cde1117d5" 
address="unix:///run/containerd/s/716e075c8a933ce677ec5fc0c476b8d45fccca1cb9aa0d7d084a0015b101a490" protocol=ttrpc version=3 May 14 23:41:35.280829 systemd[1]: Started cri-containerd-0e0f0b32d5b378889a6c1ba566f4c824e21c95e692cd71054886bd7cde1117d5.scope - libcontainer container 0e0f0b32d5b378889a6c1ba566f4c824e21c95e692cd71054886bd7cde1117d5. May 14 23:41:35.434039 containerd[1499]: time="2025-05-14T23:41:35.433987722Z" level=info msg="StartContainer for \"0e0f0b32d5b378889a6c1ba566f4c824e21c95e692cd71054886bd7cde1117d5\" returns successfully" May 14 23:41:35.491329 kubelet[2727]: E0514 23:41:35.491155 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6wg62" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" May 14 23:41:35.548371 kubelet[2727]: E0514 23:41:35.548314 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:35.616013 kubelet[2727]: E0514 23:41:35.615975 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.616013 kubelet[2727]: W0514 23:41:35.616002 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.616229 kubelet[2727]: E0514 23:41:35.616029 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.616363 kubelet[2727]: E0514 23:41:35.616338 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.616363 kubelet[2727]: W0514 23:41:35.616359 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.616437 kubelet[2727]: E0514 23:41:35.616379 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.616659 kubelet[2727]: E0514 23:41:35.616642 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.616659 kubelet[2727]: W0514 23:41:35.616654 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.616735 kubelet[2727]: E0514 23:41:35.616664 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.616917 kubelet[2727]: E0514 23:41:35.616899 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.616917 kubelet[2727]: W0514 23:41:35.616911 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.616917 kubelet[2727]: E0514 23:41:35.616920 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.617356 kubelet[2727]: E0514 23:41:35.617209 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.617356 kubelet[2727]: W0514 23:41:35.617226 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.617356 kubelet[2727]: E0514 23:41:35.617240 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.617659 kubelet[2727]: E0514 23:41:35.617626 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.617659 kubelet[2727]: W0514 23:41:35.617638 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.617659 kubelet[2727]: E0514 23:41:35.617648 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.617858 kubelet[2727]: E0514 23:41:35.617829 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.617858 kubelet[2727]: W0514 23:41:35.617839 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.617858 kubelet[2727]: E0514 23:41:35.617850 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.618145 kubelet[2727]: E0514 23:41:35.618128 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.618145 kubelet[2727]: W0514 23:41:35.618141 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.618206 kubelet[2727]: E0514 23:41:35.618151 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.618520 kubelet[2727]: E0514 23:41:35.618504 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.618520 kubelet[2727]: W0514 23:41:35.618517 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.618586 kubelet[2727]: E0514 23:41:35.618527 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.618731 kubelet[2727]: E0514 23:41:35.618718 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.618731 kubelet[2727]: W0514 23:41:35.618728 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.618782 kubelet[2727]: E0514 23:41:35.618736 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.618953 kubelet[2727]: E0514 23:41:35.618930 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.618953 kubelet[2727]: W0514 23:41:35.618941 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.618998 kubelet[2727]: E0514 23:41:35.618961 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.619160 kubelet[2727]: E0514 23:41:35.619145 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.619160 kubelet[2727]: W0514 23:41:35.619156 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.619219 kubelet[2727]: E0514 23:41:35.619165 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.619358 kubelet[2727]: E0514 23:41:35.619344 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.619358 kubelet[2727]: W0514 23:41:35.619354 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.619400 kubelet[2727]: E0514 23:41:35.619362 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.619582 kubelet[2727]: E0514 23:41:35.619568 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.619582 kubelet[2727]: W0514 23:41:35.619579 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.619643 kubelet[2727]: E0514 23:41:35.619589 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.619776 kubelet[2727]: E0514 23:41:35.619763 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.619776 kubelet[2727]: W0514 23:41:35.619773 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.619820 kubelet[2727]: E0514 23:41:35.619780 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.627020 kubelet[2727]: E0514 23:41:35.626991 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.627020 kubelet[2727]: W0514 23:41:35.627009 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.627020 kubelet[2727]: E0514 23:41:35.627021 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.627289 kubelet[2727]: E0514 23:41:35.627262 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.627289 kubelet[2727]: W0514 23:41:35.627277 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.627335 kubelet[2727]: E0514 23:41:35.627292 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.627564 kubelet[2727]: E0514 23:41:35.627538 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.627564 kubelet[2727]: W0514 23:41:35.627553 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.627610 kubelet[2727]: E0514 23:41:35.627568 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.627844 kubelet[2727]: E0514 23:41:35.627820 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.627844 kubelet[2727]: W0514 23:41:35.627835 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.627905 kubelet[2727]: E0514 23:41:35.627850 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.628114 kubelet[2727]: E0514 23:41:35.628092 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.628114 kubelet[2727]: W0514 23:41:35.628107 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.628165 kubelet[2727]: E0514 23:41:35.628122 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.628342 kubelet[2727]: E0514 23:41:35.628326 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.628342 kubelet[2727]: W0514 23:41:35.628340 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.628394 kubelet[2727]: E0514 23:41:35.628373 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.628582 kubelet[2727]: E0514 23:41:35.628567 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.628582 kubelet[2727]: W0514 23:41:35.628581 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.628626 kubelet[2727]: E0514 23:41:35.628612 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.628815 kubelet[2727]: E0514 23:41:35.628800 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.628840 kubelet[2727]: W0514 23:41:35.628813 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.628862 kubelet[2727]: E0514 23:41:35.628847 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.629065 kubelet[2727]: E0514 23:41:35.629042 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.629065 kubelet[2727]: W0514 23:41:35.629057 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.629121 kubelet[2727]: E0514 23:41:35.629073 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.629296 kubelet[2727]: E0514 23:41:35.629280 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.629329 kubelet[2727]: W0514 23:41:35.629295 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.629329 kubelet[2727]: E0514 23:41:35.629313 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.629557 kubelet[2727]: E0514 23:41:35.629542 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.629557 kubelet[2727]: W0514 23:41:35.629556 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.629601 kubelet[2727]: E0514 23:41:35.629571 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.629805 kubelet[2727]: E0514 23:41:35.629788 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.629829 kubelet[2727]: W0514 23:41:35.629803 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.629829 kubelet[2727]: E0514 23:41:35.629819 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.630057 kubelet[2727]: E0514 23:41:35.630041 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.630057 kubelet[2727]: W0514 23:41:35.630055 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.630104 kubelet[2727]: E0514 23:41:35.630071 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.630292 kubelet[2727]: E0514 23:41:35.630276 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.630292 kubelet[2727]: W0514 23:41:35.630289 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.630335 kubelet[2727]: E0514 23:41:35.630303 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.630597 kubelet[2727]: E0514 23:41:35.630566 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.630597 kubelet[2727]: W0514 23:41:35.630582 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.630597 kubelet[2727]: E0514 23:41:35.630601 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.630872 kubelet[2727]: E0514 23:41:35.630855 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.630872 kubelet[2727]: W0514 23:41:35.630870 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.630929 kubelet[2727]: E0514 23:41:35.630887 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:35.631104 kubelet[2727]: E0514 23:41:35.631089 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.631104 kubelet[2727]: W0514 23:41:35.631103 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.631170 kubelet[2727]: E0514 23:41:35.631117 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.631352 kubelet[2727]: E0514 23:41:35.631331 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:35.631352 kubelet[2727]: W0514 23:41:35.631345 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:35.631395 kubelet[2727]: E0514 23:41:35.631356 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:35.802362 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:43906.service - OpenSSH per-connection server daemon (10.0.0.1:43906). May 14 23:41:35.864598 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 43906 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8 May 14 23:41:35.866719 sshd-session[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:35.872172 systemd-logind[1486]: New session 8 of user core. May 14 23:41:35.880783 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 23:41:36.020625 sshd[3371]: Connection closed by 10.0.0.1 port 43906 May 14 23:41:36.021104 sshd-session[3369]: pam_unix(sshd:session): session closed for user core May 14 23:41:36.026944 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:43906.service: Deactivated successfully. May 14 23:41:36.029412 systemd[1]: session-8.scope: Deactivated successfully. May 14 23:41:36.030211 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. May 14 23:41:36.031322 systemd-logind[1486]: Removed session 8. May 14 23:41:36.548900 kubelet[2727]: I0514 23:41:36.548851 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:41:36.549442 kubelet[2727]: E0514 23:41:36.549425 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:36.628772 kubelet[2727]: E0514 23:41:36.628728 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.628772 kubelet[2727]: W0514 23:41:36.628757 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.628973 kubelet[2727]: E0514 23:41:36.628785 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.629066 kubelet[2727]: E0514 23:41:36.629039 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.629066 kubelet[2727]: W0514 23:41:36.629057 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.629136 kubelet[2727]: E0514 23:41:36.629073 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.629368 kubelet[2727]: E0514 23:41:36.629353 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.629368 kubelet[2727]: W0514 23:41:36.629365 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.629461 kubelet[2727]: E0514 23:41:36.629376 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.629638 kubelet[2727]: E0514 23:41:36.629617 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.629638 kubelet[2727]: W0514 23:41:36.629630 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.629714 kubelet[2727]: E0514 23:41:36.629641 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.629892 kubelet[2727]: E0514 23:41:36.629871 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.629892 kubelet[2727]: W0514 23:41:36.629884 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.629976 kubelet[2727]: E0514 23:41:36.629895 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.630165 kubelet[2727]: E0514 23:41:36.630149 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.630165 kubelet[2727]: W0514 23:41:36.630161 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.630235 kubelet[2727]: E0514 23:41:36.630171 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.630385 kubelet[2727]: E0514 23:41:36.630370 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.630385 kubelet[2727]: W0514 23:41:36.630381 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.630454 kubelet[2727]: E0514 23:41:36.630391 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.630642 kubelet[2727]: E0514 23:41:36.630625 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.630642 kubelet[2727]: W0514 23:41:36.630639 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.630722 kubelet[2727]: E0514 23:41:36.630650 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.630945 kubelet[2727]: E0514 23:41:36.630910 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.630945 kubelet[2727]: W0514 23:41:36.630935 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.630945 kubelet[2727]: E0514 23:41:36.630946 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.631166 kubelet[2727]: E0514 23:41:36.631151 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.631166 kubelet[2727]: W0514 23:41:36.631163 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.631227 kubelet[2727]: E0514 23:41:36.631173 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.631389 kubelet[2727]: E0514 23:41:36.631375 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.631389 kubelet[2727]: W0514 23:41:36.631387 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.631450 kubelet[2727]: E0514 23:41:36.631397 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.631635 kubelet[2727]: E0514 23:41:36.631620 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.631635 kubelet[2727]: W0514 23:41:36.631632 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.631709 kubelet[2727]: E0514 23:41:36.631643 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.631872 kubelet[2727]: E0514 23:41:36.631859 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.631896 kubelet[2727]: W0514 23:41:36.631871 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.631896 kubelet[2727]: E0514 23:41:36.631881 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.632127 kubelet[2727]: E0514 23:41:36.632109 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.632127 kubelet[2727]: W0514 23:41:36.632125 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.632198 kubelet[2727]: E0514 23:41:36.632137 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.632377 kubelet[2727]: E0514 23:41:36.632364 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.632399 kubelet[2727]: W0514 23:41:36.632376 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.632399 kubelet[2727]: E0514 23:41:36.632386 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.635622 kubelet[2727]: E0514 23:41:36.635599 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.635622 kubelet[2727]: W0514 23:41:36.635614 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.635705 kubelet[2727]: E0514 23:41:36.635625 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.635894 kubelet[2727]: E0514 23:41:36.635873 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.635894 kubelet[2727]: W0514 23:41:36.635887 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.635994 kubelet[2727]: E0514 23:41:36.635902 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.636201 kubelet[2727]: E0514 23:41:36.636176 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.636201 kubelet[2727]: W0514 23:41:36.636194 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.636268 kubelet[2727]: E0514 23:41:36.636210 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.636446 kubelet[2727]: E0514 23:41:36.636424 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.636446 kubelet[2727]: W0514 23:41:36.636436 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.636537 kubelet[2727]: E0514 23:41:36.636450 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.636713 kubelet[2727]: E0514 23:41:36.636698 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.636713 kubelet[2727]: W0514 23:41:36.636710 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.636776 kubelet[2727]: E0514 23:41:36.636725 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.636996 kubelet[2727]: E0514 23:41:36.636974 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.636996 kubelet[2727]: W0514 23:41:36.636989 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.637063 kubelet[2727]: E0514 23:41:36.637003 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.637362 kubelet[2727]: E0514 23:41:36.637344 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.637362 kubelet[2727]: W0514 23:41:36.637360 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.637451 kubelet[2727]: E0514 23:41:36.637378 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.637644 kubelet[2727]: E0514 23:41:36.637628 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.637644 kubelet[2727]: W0514 23:41:36.637641 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.637745 kubelet[2727]: E0514 23:41:36.637671 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.637865 kubelet[2727]: E0514 23:41:36.637848 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.637865 kubelet[2727]: W0514 23:41:36.637860 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.637959 kubelet[2727]: E0514 23:41:36.637885 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.638120 kubelet[2727]: E0514 23:41:36.638099 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.638120 kubelet[2727]: W0514 23:41:36.638113 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.638187 kubelet[2727]: E0514 23:41:36.638130 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.638355 kubelet[2727]: E0514 23:41:36.638339 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.638355 kubelet[2727]: W0514 23:41:36.638351 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.638434 kubelet[2727]: E0514 23:41:36.638366 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.638590 kubelet[2727]: E0514 23:41:36.638573 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.638590 kubelet[2727]: W0514 23:41:36.638587 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.638672 kubelet[2727]: E0514 23:41:36.638602 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.638855 kubelet[2727]: E0514 23:41:36.638838 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.638855 kubelet[2727]: W0514 23:41:36.638850 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.638950 kubelet[2727]: E0514 23:41:36.638866 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.639175 kubelet[2727]: E0514 23:41:36.639154 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.639175 kubelet[2727]: W0514 23:41:36.639170 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.639278 kubelet[2727]: E0514 23:41:36.639188 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.639418 kubelet[2727]: E0514 23:41:36.639404 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.639418 kubelet[2727]: W0514 23:41:36.639416 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.639474 kubelet[2727]: E0514 23:41:36.639431 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.639705 kubelet[2727]: E0514 23:41:36.639691 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.639705 kubelet[2727]: W0514 23:41:36.639704 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.639751 kubelet[2727]: E0514 23:41:36.639718 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:36.640039 kubelet[2727]: E0514 23:41:36.640018 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.640039 kubelet[2727]: W0514 23:41:36.640037 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.640123 kubelet[2727]: E0514 23:41:36.640053 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:41:36.640278 kubelet[2727]: E0514 23:41:36.640263 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:41:36.640301 kubelet[2727]: W0514 23:41:36.640277 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:41:36.640301 kubelet[2727]: E0514 23:41:36.640290 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:41:37.053465 containerd[1499]: time="2025-05-14T23:41:37.053392408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:37.054344 containerd[1499]: time="2025-05-14T23:41:37.054286154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 23:41:37.055558 containerd[1499]: time="2025-05-14T23:41:37.055518274Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:37.057671 containerd[1499]: time="2025-05-14T23:41:37.057641957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:37.058368 containerd[1499]: time="2025-05-14T23:41:37.058335688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.833047628s" May 14 23:41:37.058443 containerd[1499]: time="2025-05-14T23:41:37.058373709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 23:41:37.060879 containerd[1499]: time="2025-05-14T23:41:37.060837950Z" level=info msg="CreateContainer within sandbox \"e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 23:41:37.071881 containerd[1499]: time="2025-05-14T23:41:37.071813453Z" level=info msg="Container 061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:37.084817 containerd[1499]: time="2025-05-14T23:41:37.084757297Z" level=info msg="CreateContainer within sandbox \"e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4\"" May 14 23:41:37.085536 containerd[1499]: time="2025-05-14T23:41:37.085389624Z" level=info msg="StartContainer for \"061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4\"" May 14 23:41:37.090301 containerd[1499]: time="2025-05-14T23:41:37.089778514Z" level=info msg="connecting to shim 061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4" address="unix:///run/containerd/s/51ff5a75e47ed8727f85b89ec2218657c9e5fb27f581a0c2e8cb92e1c7dc654e" protocol=ttrpc version=3 May 14 23:41:37.120849 systemd[1]: Started cri-containerd-061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4.scope - libcontainer container 061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4. May 14 23:41:37.185327 systemd[1]: cri-containerd-061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4.scope: Deactivated successfully. 
May 14 23:41:37.188643 containerd[1499]: time="2025-05-14T23:41:37.188596821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4\" id:\"061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4\" pid:3438 exited_at:{seconds:1747266097 nanos:187912959}" May 14 23:41:37.446380 containerd[1499]: time="2025-05-14T23:41:37.446236341Z" level=info msg="received exit event container_id:\"061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4\" id:\"061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4\" pid:3438 exited_at:{seconds:1747266097 nanos:187912959}" May 14 23:41:37.454716 containerd[1499]: time="2025-05-14T23:41:37.454679415Z" level=info msg="StartContainer for \"061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4\" returns successfully" May 14 23:41:37.467989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-061659522788813af1516999504f0df06882950d2882a2edac3c3ca3c2ba15b4-rootfs.mount: Deactivated successfully. 
May 14 23:41:37.501039 kubelet[2727]: E0514 23:41:37.500974 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6wg62" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" May 14 23:41:37.737733 kubelet[2727]: E0514 23:41:37.559072 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:37.956353 kubelet[2727]: I0514 23:41:37.956262 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54cd8c47bb-47h5q" podStartSLOduration=4.147541346 podStartE2EDuration="6.956238887s" podCreationTimestamp="2025-05-14 23:41:31 +0000 UTC" firstStartedPulling="2025-05-14 23:41:32.416452861 +0000 UTC m=+21.005555262" lastFinishedPulling="2025-05-14 23:41:35.225150402 +0000 UTC m=+23.814252803" observedRunningTime="2025-05-14 23:41:35.76991702 +0000 UTC m=+24.359019431" watchObservedRunningTime="2025-05-14 23:41:37.956238887 +0000 UTC m=+26.545341288" May 14 23:41:38.565185 kubelet[2727]: E0514 23:41:38.565122 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:38.569585 containerd[1499]: time="2025-05-14T23:41:38.569518285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 23:41:39.491885 kubelet[2727]: E0514 23:41:39.491734 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6wg62" 
podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" May 14 23:41:41.038762 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:34008.service - OpenSSH per-connection server daemon (10.0.0.1:34008). May 14 23:41:41.091730 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 34008 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8 May 14 23:41:41.093441 sshd-session[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:41.098994 systemd-logind[1486]: New session 9 of user core. May 14 23:41:41.107694 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 23:41:41.307425 sshd[3478]: Connection closed by 10.0.0.1 port 34008 May 14 23:41:41.307744 sshd-session[3476]: pam_unix(sshd:session): session closed for user core May 14 23:41:41.313025 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:34008.service: Deactivated successfully. May 14 23:41:41.315748 systemd[1]: session-9.scope: Deactivated successfully. May 14 23:41:41.316759 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. May 14 23:41:41.317948 systemd-logind[1486]: Removed session 9. 
May 14 23:41:41.502749 kubelet[2727]: E0514 23:41:41.502141 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6wg62" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" May 14 23:41:43.299021 containerd[1499]: time="2025-05-14T23:41:43.298934755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:43.299817 containerd[1499]: time="2025-05-14T23:41:43.299719647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 23:41:43.301015 containerd[1499]: time="2025-05-14T23:41:43.300978678Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:43.303216 containerd[1499]: time="2025-05-14T23:41:43.303184155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:43.305908 containerd[1499]: time="2025-05-14T23:41:43.305857329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.736282858s" May 14 23:41:43.305983 containerd[1499]: time="2025-05-14T23:41:43.305907583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 23:41:43.308980 containerd[1499]: time="2025-05-14T23:41:43.308932276Z" level=info msg="CreateContainer within sandbox \"e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 23:41:43.319317 containerd[1499]: time="2025-05-14T23:41:43.319269243Z" level=info msg="Container 39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e: CDI devices from CRI Config.CDIDevices: []" May 14 23:41:43.329296 containerd[1499]: time="2025-05-14T23:41:43.329248821Z" level=info msg="CreateContainer within sandbox \"e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e\"" May 14 23:41:43.329873 containerd[1499]: time="2025-05-14T23:41:43.329834179Z" level=info msg="StartContainer for \"39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e\"" May 14 23:41:43.331508 containerd[1499]: time="2025-05-14T23:41:43.331452084Z" level=info msg="connecting to shim 39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e" address="unix:///run/containerd/s/51ff5a75e47ed8727f85b89ec2218657c9e5fb27f581a0c2e8cb92e1c7dc654e" protocol=ttrpc version=3 May 14 23:41:43.355646 systemd[1]: Started cri-containerd-39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e.scope - libcontainer container 39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e. 
May 14 23:41:43.406793 containerd[1499]: time="2025-05-14T23:41:43.406736293Z" level=info msg="StartContainer for \"39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e\" returns successfully" May 14 23:41:43.491807 kubelet[2727]: E0514 23:41:43.491380 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6wg62" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b" May 14 23:41:43.577672 kubelet[2727]: E0514 23:41:43.577523 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:44.578992 kubelet[2727]: E0514 23:41:44.578958 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:45.237900 systemd[1]: cri-containerd-39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e.scope: Deactivated successfully. May 14 23:41:45.238355 systemd[1]: cri-containerd-39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e.scope: Consumed 603ms CPU time, 161M memory peak, 8K read from disk, 154M written to disk. 
May 14 23:41:45.239314 containerd[1499]: time="2025-05-14T23:41:45.239274513Z" level=info msg="received exit event container_id:\"39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e\" id:\"39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e\" pid:3511 exited_at:{seconds:1747266105 nanos:238981663}" May 14 23:41:45.239766 containerd[1499]: time="2025-05-14T23:41:45.239290723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e\" id:\"39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e\" pid:3511 exited_at:{seconds:1747266105 nanos:238981663}" May 14 23:41:45.264818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39b7178759ed863b434a44db9d9132158ac0bb3e64e60e6fa308b7025a7ae68e-rootfs.mount: Deactivated successfully. May 14 23:41:45.294578 kubelet[2727]: I0514 23:41:45.294533 2727 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 23:41:45.498828 systemd[1]: Created slice kubepods-besteffort-pod2231288d_40de_4b1d_a5cc_5c1b3be4909b.slice - libcontainer container kubepods-besteffort-pod2231288d_40de_4b1d_a5cc_5c1b3be4909b.slice. 
May 14 23:41:45.501965 containerd[1499]: time="2025-05-14T23:41:45.501904915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6wg62,Uid:2231288d-40de-4b1d-a5cc-5c1b3be4909b,Namespace:calico-system,Attempt:0,}" May 14 23:41:45.527834 kubelet[2727]: I0514 23:41:45.522946 2727 topology_manager.go:215] "Topology Admit Handler" podUID="fbea6818-76ed-4ce8-9c01-cffb17b8838f" podNamespace="calico-system" podName="calico-kube-controllers-598c5b97d5-d8vn5" May 14 23:41:45.530402 kubelet[2727]: I0514 23:41:45.529445 2727 topology_manager.go:215] "Topology Admit Handler" podUID="c4f9e896-1a7f-4713-ae61-8318d251676c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-skqbv" May 14 23:41:45.530402 kubelet[2727]: I0514 23:41:45.529717 2727 topology_manager.go:215] "Topology Admit Handler" podUID="8c1deacb-91fb-4a4a-8b93-e83b05a54eeb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wshnq" May 14 23:41:45.530659 kubelet[2727]: I0514 23:41:45.530395 2727 topology_manager.go:215] "Topology Admit Handler" podUID="fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235" podNamespace="calico-apiserver" podName="calico-apiserver-6c668dd479-lk74k" May 14 23:41:45.531765 kubelet[2727]: I0514 23:41:45.531462 2727 topology_manager.go:215] "Topology Admit Handler" podUID="c94e7212-5b50-4b5f-8ec3-5e1470b006a9" podNamespace="calico-apiserver" podName="calico-apiserver-6c668dd479-hrrxf" May 14 23:41:45.545181 systemd[1]: Created slice kubepods-besteffort-podfbea6818_76ed_4ce8_9c01_cffb17b8838f.slice - libcontainer container kubepods-besteffort-podfbea6818_76ed_4ce8_9c01_cffb17b8838f.slice. May 14 23:41:45.552293 systemd[1]: Created slice kubepods-burstable-pod8c1deacb_91fb_4a4a_8b93_e83b05a54eeb.slice - libcontainer container kubepods-burstable-pod8c1deacb_91fb_4a4a_8b93_e83b05a54eeb.slice. 
May 14 23:41:45.558675 systemd[1]: Created slice kubepods-burstable-podc4f9e896_1a7f_4713_ae61_8318d251676c.slice - libcontainer container kubepods-burstable-podc4f9e896_1a7f_4713_ae61_8318d251676c.slice. May 14 23:41:45.566537 systemd[1]: Created slice kubepods-besteffort-podfbbd9d27_e7f2_4ecf_ba40_ec5a5cd99235.slice - libcontainer container kubepods-besteffort-podfbbd9d27_e7f2_4ecf_ba40_ec5a5cd99235.slice. May 14 23:41:45.573238 systemd[1]: Created slice kubepods-besteffort-podc94e7212_5b50_4b5f_8ec3_5e1470b006a9.slice - libcontainer container kubepods-besteffort-podc94e7212_5b50_4b5f_8ec3_5e1470b006a9.slice. May 14 23:41:45.582898 kubelet[2727]: E0514 23:41:45.582853 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:45.583922 containerd[1499]: time="2025-05-14T23:41:45.583656659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 23:41:45.595239 kubelet[2727]: I0514 23:41:45.595178 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbea6818-76ed-4ce8-9c01-cffb17b8838f-tigera-ca-bundle\") pod \"calico-kube-controllers-598c5b97d5-d8vn5\" (UID: \"fbea6818-76ed-4ce8-9c01-cffb17b8838f\") " pod="calico-system/calico-kube-controllers-598c5b97d5-d8vn5" May 14 23:41:45.595239 kubelet[2727]: I0514 23:41:45.595217 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fhp\" (UniqueName: \"kubernetes.io/projected/8c1deacb-91fb-4a4a-8b93-e83b05a54eeb-kube-api-access-g5fhp\") pod \"coredns-7db6d8ff4d-wshnq\" (UID: \"8c1deacb-91fb-4a4a-8b93-e83b05a54eeb\") " pod="kube-system/coredns-7db6d8ff4d-wshnq" May 14 23:41:45.595239 kubelet[2727]: I0514 23:41:45.595237 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c94e7212-5b50-4b5f-8ec3-5e1470b006a9-calico-apiserver-certs\") pod \"calico-apiserver-6c668dd479-hrrxf\" (UID: \"c94e7212-5b50-4b5f-8ec3-5e1470b006a9\") " pod="calico-apiserver/calico-apiserver-6c668dd479-hrrxf" May 14 23:41:45.595239 kubelet[2727]: I0514 23:41:45.595253 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqf5\" (UniqueName: \"kubernetes.io/projected/fbea6818-76ed-4ce8-9c01-cffb17b8838f-kube-api-access-7hqf5\") pod \"calico-kube-controllers-598c5b97d5-d8vn5\" (UID: \"fbea6818-76ed-4ce8-9c01-cffb17b8838f\") " pod="calico-system/calico-kube-controllers-598c5b97d5-d8vn5" May 14 23:41:45.595597 kubelet[2727]: I0514 23:41:45.595271 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235-calico-apiserver-certs\") pod \"calico-apiserver-6c668dd479-lk74k\" (UID: \"fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235\") " pod="calico-apiserver/calico-apiserver-6c668dd479-lk74k" May 14 23:41:45.595597 kubelet[2727]: I0514 23:41:45.595291 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5gqf\" (UniqueName: \"kubernetes.io/projected/fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235-kube-api-access-c5gqf\") pod \"calico-apiserver-6c668dd479-lk74k\" (UID: \"fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235\") " pod="calico-apiserver/calico-apiserver-6c668dd479-lk74k" May 14 23:41:45.595597 kubelet[2727]: I0514 23:41:45.595315 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5bdz\" (UniqueName: \"kubernetes.io/projected/c94e7212-5b50-4b5f-8ec3-5e1470b006a9-kube-api-access-v5bdz\") pod \"calico-apiserver-6c668dd479-hrrxf\" (UID: \"c94e7212-5b50-4b5f-8ec3-5e1470b006a9\") " 
pod="calico-apiserver/calico-apiserver-6c668dd479-hrrxf" May 14 23:41:45.595597 kubelet[2727]: I0514 23:41:45.595331 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4f9e896-1a7f-4713-ae61-8318d251676c-config-volume\") pod \"coredns-7db6d8ff4d-skqbv\" (UID: \"c4f9e896-1a7f-4713-ae61-8318d251676c\") " pod="kube-system/coredns-7db6d8ff4d-skqbv" May 14 23:41:45.595597 kubelet[2727]: I0514 23:41:45.595346 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1deacb-91fb-4a4a-8b93-e83b05a54eeb-config-volume\") pod \"coredns-7db6d8ff4d-wshnq\" (UID: \"8c1deacb-91fb-4a4a-8b93-e83b05a54eeb\") " pod="kube-system/coredns-7db6d8ff4d-wshnq" May 14 23:41:45.595818 kubelet[2727]: I0514 23:41:45.595362 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2mvq\" (UniqueName: \"kubernetes.io/projected/c4f9e896-1a7f-4713-ae61-8318d251676c-kube-api-access-r2mvq\") pod \"coredns-7db6d8ff4d-skqbv\" (UID: \"c4f9e896-1a7f-4713-ae61-8318d251676c\") " pod="kube-system/coredns-7db6d8ff4d-skqbv" May 14 23:41:45.624191 containerd[1499]: time="2025-05-14T23:41:45.624127677Z" level=error msg="Failed to destroy network for sandbox \"e225af3a749009665d79e6d520cab859ca980f85f2a96d58c8a9eb49cd7ae26e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:41:45.626885 systemd[1]: run-netns-cni\x2d21ba5fb5\x2d7669\x2d980b\x2dd7cf\x2db81c79749abc.mount: Deactivated successfully. 
May 14 23:41:45.685027 containerd[1499]: time="2025-05-14T23:41:45.684948574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6wg62,Uid:2231288d-40de-4b1d-a5cc-5c1b3be4909b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225af3a749009665d79e6d520cab859ca980f85f2a96d58c8a9eb49cd7ae26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:45.685272 kubelet[2727]: E0514 23:41:45.685221 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225af3a749009665d79e6d520cab859ca980f85f2a96d58c8a9eb49cd7ae26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:45.685334 kubelet[2727]: E0514 23:41:45.685283 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225af3a749009665d79e6d520cab859ca980f85f2a96d58c8a9eb49cd7ae26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6wg62"
May 14 23:41:45.685334 kubelet[2727]: E0514 23:41:45.685307 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225af3a749009665d79e6d520cab859ca980f85f2a96d58c8a9eb49cd7ae26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6wg62"
May 14 23:41:45.685414 kubelet[2727]: E0514 23:41:45.685365 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6wg62_calico-system(2231288d-40de-4b1d-a5cc-5c1b3be4909b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6wg62_calico-system(2231288d-40de-4b1d-a5cc-5c1b3be4909b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e225af3a749009665d79e6d520cab859ca980f85f2a96d58c8a9eb49cd7ae26e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6wg62" podUID="2231288d-40de-4b1d-a5cc-5c1b3be4909b"
May 14 23:41:45.863550 kubelet[2727]: E0514 23:41:45.863475 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:45.864414 containerd[1499]: time="2025-05-14T23:41:45.864108148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skqbv,Uid:c4f9e896-1a7f-4713-ae61-8318d251676c,Namespace:kube-system,Attempt:0,}"
May 14 23:41:45.869938 containerd[1499]: time="2025-05-14T23:41:45.869899560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-lk74k,Uid:fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235,Namespace:calico-apiserver,Attempt:0,}"
May 14 23:41:45.876553 containerd[1499]: time="2025-05-14T23:41:45.876513716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-hrrxf,Uid:c94e7212-5b50-4b5f-8ec3-5e1470b006a9,Namespace:calico-apiserver,Attempt:0,}"
May 14 23:41:46.148953 containerd[1499]: time="2025-05-14T23:41:46.148784684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598c5b97d5-d8vn5,Uid:fbea6818-76ed-4ce8-9c01-cffb17b8838f,Namespace:calico-system,Attempt:0,}"
May 14 23:41:46.156219 kubelet[2727]: E0514 23:41:46.156185 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:46.156870 containerd[1499]: time="2025-05-14T23:41:46.156820778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wshnq,Uid:8c1deacb-91fb-4a4a-8b93-e83b05a54eeb,Namespace:kube-system,Attempt:0,}"
May 14 23:41:46.255862 containerd[1499]: time="2025-05-14T23:41:46.255801008Z" level=error msg="Failed to destroy network for sandbox \"788e3dbde94c543ad4372933daf7de8cbaf6548e2e845ba1fad21a0582c5ac2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.328879 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:34016.service - OpenSSH per-connection server daemon (10.0.0.1:34016).
May 14 23:41:46.349756 containerd[1499]: time="2025-05-14T23:41:46.349691032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skqbv,Uid:c4f9e896-1a7f-4713-ae61-8318d251676c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"788e3dbde94c543ad4372933daf7de8cbaf6548e2e845ba1fad21a0582c5ac2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.350512 kubelet[2727]: E0514 23:41:46.350159 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788e3dbde94c543ad4372933daf7de8cbaf6548e2e845ba1fad21a0582c5ac2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.350512 kubelet[2727]: E0514 23:41:46.350274 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788e3dbde94c543ad4372933daf7de8cbaf6548e2e845ba1fad21a0582c5ac2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-skqbv"
May 14 23:41:46.350512 kubelet[2727]: E0514 23:41:46.350299 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788e3dbde94c543ad4372933daf7de8cbaf6548e2e845ba1fad21a0582c5ac2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-skqbv"
May 14 23:41:46.351914 kubelet[2727]: E0514 23:41:46.350356 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-skqbv_kube-system(c4f9e896-1a7f-4713-ae61-8318d251676c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-skqbv_kube-system(c4f9e896-1a7f-4713-ae61-8318d251676c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"788e3dbde94c543ad4372933daf7de8cbaf6548e2e845ba1fad21a0582c5ac2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-skqbv" podUID="c4f9e896-1a7f-4713-ae61-8318d251676c"
May 14 23:41:46.402956 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 34016 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:41:46.402859 sshd-session[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:46.403715 containerd[1499]: time="2025-05-14T23:41:46.403571642Z" level=error msg="Failed to destroy network for sandbox \"83520da11bb96a957e878fc5afbeeb0ac19d947340cf737dd707e1d86544bb38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.407561 systemd[1]: run-netns-cni\x2d59648c2a\x2de2f0\x2d4fb5\x2d3c75\x2da54d5a42ac2a.mount: Deactivated successfully.
May 14 23:41:46.415084 containerd[1499]: time="2025-05-14T23:41:46.414340190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-lk74k,Uid:fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83520da11bb96a957e878fc5afbeeb0ac19d947340cf737dd707e1d86544bb38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.415277 kubelet[2727]: E0514 23:41:46.415119 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83520da11bb96a957e878fc5afbeeb0ac19d947340cf737dd707e1d86544bb38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.415277 kubelet[2727]: E0514 23:41:46.415198 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83520da11bb96a957e878fc5afbeeb0ac19d947340cf737dd707e1d86544bb38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c668dd479-lk74k"
May 14 23:41:46.415277 kubelet[2727]: E0514 23:41:46.415230 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83520da11bb96a957e878fc5afbeeb0ac19d947340cf737dd707e1d86544bb38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c668dd479-lk74k"
May 14 23:41:46.415426 kubelet[2727]: E0514 23:41:46.415295 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c668dd479-lk74k_calico-apiserver(fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c668dd479-lk74k_calico-apiserver(fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83520da11bb96a957e878fc5afbeeb0ac19d947340cf737dd707e1d86544bb38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c668dd479-lk74k" podUID="fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235"
May 14 23:41:46.417402 containerd[1499]: time="2025-05-14T23:41:46.417106418Z" level=error msg="Failed to destroy network for sandbox \"e6517c0dac9473e0b4600a9fdaa52c6bb6214229c6a05d90823fc965db07a8d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.418837 systemd-logind[1486]: New session 10 of user core.
May 14 23:41:46.421522 containerd[1499]: time="2025-05-14T23:41:46.421347564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-hrrxf,Uid:c94e7212-5b50-4b5f-8ec3-5e1470b006a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6517c0dac9473e0b4600a9fdaa52c6bb6214229c6a05d90823fc965db07a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.421716 kubelet[2727]: E0514 23:41:46.421660 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6517c0dac9473e0b4600a9fdaa52c6bb6214229c6a05d90823fc965db07a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.421808 kubelet[2727]: E0514 23:41:46.421752 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6517c0dac9473e0b4600a9fdaa52c6bb6214229c6a05d90823fc965db07a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c668dd479-hrrxf"
May 14 23:41:46.421808 kubelet[2727]: E0514 23:41:46.421781 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6517c0dac9473e0b4600a9fdaa52c6bb6214229c6a05d90823fc965db07a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c668dd479-hrrxf"
May 14 23:41:46.421882 kubelet[2727]: E0514 23:41:46.421840 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c668dd479-hrrxf_calico-apiserver(c94e7212-5b50-4b5f-8ec3-5e1470b006a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c668dd479-hrrxf_calico-apiserver(c94e7212-5b50-4b5f-8ec3-5e1470b006a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6517c0dac9473e0b4600a9fdaa52c6bb6214229c6a05d90823fc965db07a8d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c668dd479-hrrxf" podUID="c94e7212-5b50-4b5f-8ec3-5e1470b006a9"
May 14 23:41:46.422007 systemd[1]: run-netns-cni\x2d17a6b910\x2d172e\x2d1c30\x2d7925\x2d7a6990538bf1.mount: Deactivated successfully.
May 14 23:41:46.429738 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 23:41:46.434825 containerd[1499]: time="2025-05-14T23:41:46.434549876Z" level=error msg="Failed to destroy network for sandbox \"d0f7ddc8f5748909c01d5160541ca069acc1c0d2d40ab5c881a30d6de781f3ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.436862 containerd[1499]: time="2025-05-14T23:41:46.436737560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wshnq,Uid:8c1deacb-91fb-4a4a-8b93-e83b05a54eeb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f7ddc8f5748909c01d5160541ca069acc1c0d2d40ab5c881a30d6de781f3ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.437096 kubelet[2727]: E0514 23:41:46.437040 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f7ddc8f5748909c01d5160541ca069acc1c0d2d40ab5c881a30d6de781f3ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.437171 kubelet[2727]: E0514 23:41:46.437120 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f7ddc8f5748909c01d5160541ca069acc1c0d2d40ab5c881a30d6de781f3ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wshnq"
May 14 23:41:46.437171 kubelet[2727]: E0514 23:41:46.437150 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f7ddc8f5748909c01d5160541ca069acc1c0d2d40ab5c881a30d6de781f3ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wshnq"
May 14 23:41:46.437239 kubelet[2727]: E0514 23:41:46.437204 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wshnq_kube-system(8c1deacb-91fb-4a4a-8b93-e83b05a54eeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wshnq_kube-system(8c1deacb-91fb-4a4a-8b93-e83b05a54eeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0f7ddc8f5748909c01d5160541ca069acc1c0d2d40ab5c881a30d6de781f3ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wshnq" podUID="8c1deacb-91fb-4a4a-8b93-e83b05a54eeb"
May 14 23:41:46.437872 containerd[1499]: time="2025-05-14T23:41:46.437837232Z" level=error msg="Failed to destroy network for sandbox \"da192a649053d207c8da22a91b808f94c5f61891b19e5fc3c7d4d3871337d284\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.439243 containerd[1499]: time="2025-05-14T23:41:46.439201321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598c5b97d5-d8vn5,Uid:fbea6818-76ed-4ce8-9c01-cffb17b8838f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da192a649053d207c8da22a91b808f94c5f61891b19e5fc3c7d4d3871337d284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.439507 kubelet[2727]: E0514 23:41:46.439453 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da192a649053d207c8da22a91b808f94c5f61891b19e5fc3c7d4d3871337d284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 23:41:46.439582 kubelet[2727]: E0514 23:41:46.439522 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da192a649053d207c8da22a91b808f94c5f61891b19e5fc3c7d4d3871337d284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598c5b97d5-d8vn5"
May 14 23:41:46.439582 kubelet[2727]: E0514 23:41:46.439545 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da192a649053d207c8da22a91b808f94c5f61891b19e5fc3c7d4d3871337d284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598c5b97d5-d8vn5"
May 14 23:41:46.439712 kubelet[2727]: E0514 23:41:46.439602 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-598c5b97d5-d8vn5_calico-system(fbea6818-76ed-4ce8-9c01-cffb17b8838f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-598c5b97d5-d8vn5_calico-system(fbea6818-76ed-4ce8-9c01-cffb17b8838f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da192a649053d207c8da22a91b808f94c5f61891b19e5fc3c7d4d3871337d284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598c5b97d5-d8vn5" podUID="fbea6818-76ed-4ce8-9c01-cffb17b8838f"
May 14 23:41:46.563397 sshd[3773]: Connection closed by 10.0.0.1 port 34016
May 14 23:41:46.563822 sshd-session[3642]: pam_unix(sshd:session): session closed for user core
May 14 23:41:46.568115 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:34016.service: Deactivated successfully.
May 14 23:41:46.570418 systemd[1]: session-10.scope: Deactivated successfully.
May 14 23:41:46.571119 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit.
May 14 23:41:46.572152 systemd-logind[1486]: Removed session 10.
May 14 23:41:47.265017 systemd[1]: run-netns-cni\x2d02227040\x2d0cf3\x2d36fa\x2d684a\x2deefe44e12c8a.mount: Deactivated successfully.
May 14 23:41:47.265156 systemd[1]: run-netns-cni\x2d28cf5062\x2d8fa9\x2dedaf\x2dbe71\x2d88733a837504.mount: Deactivated successfully.
May 14 23:41:48.334177 kubelet[2727]: I0514 23:41:48.334108 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 23:41:48.337499 kubelet[2727]: E0514 23:41:48.336784 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:48.592846 kubelet[2727]: E0514 23:41:48.592700 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:51.581825 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:32938.service - OpenSSH per-connection server daemon (10.0.0.1:32938).
May 14 23:41:51.640528 sshd[3793]: Accepted publickey for core from 10.0.0.1 port 32938 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:41:51.642894 sshd-session[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:51.649260 systemd-logind[1486]: New session 11 of user core.
May 14 23:41:51.653609 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 23:41:51.789255 sshd[3795]: Connection closed by 10.0.0.1 port 32938
May 14 23:41:51.789622 sshd-session[3793]: pam_unix(sshd:session): session closed for user core
May 14 23:41:51.795021 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:32938.service: Deactivated successfully.
May 14 23:41:51.797298 systemd[1]: session-11.scope: Deactivated successfully.
May 14 23:41:51.798182 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit.
May 14 23:41:51.799577 systemd-logind[1486]: Removed session 11.
May 14 23:41:53.201350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992376501.mount: Deactivated successfully.
May 14 23:41:55.086960 containerd[1499]: time="2025-05-14T23:41:55.086882374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:41:55.088257 containerd[1499]: time="2025-05-14T23:41:55.088209503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748"
May 14 23:41:55.090032 containerd[1499]: time="2025-05-14T23:41:55.089961058Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:41:55.092163 containerd[1499]: time="2025-05-14T23:41:55.092124467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:41:55.092728 containerd[1499]: time="2025-05-14T23:41:55.092674689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 9.508976993s"
May 14 23:41:55.092767 containerd[1499]: time="2025-05-14T23:41:55.092726988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\""
May 14 23:41:55.105992 containerd[1499]: time="2025-05-14T23:41:55.105944921Z" level=info msg="CreateContainer within sandbox \"e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 14 23:41:55.120794 containerd[1499]: time="2025-05-14T23:41:55.120714125Z" level=info msg="Container 3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c: CDI devices from CRI Config.CDIDevices: []"
May 14 23:41:55.135136 containerd[1499]: time="2025-05-14T23:41:55.135075254Z" level=info msg="CreateContainer within sandbox \"e1ef89a5e61b69ce2b6ba77a968b3a79ec1604410b2c2431a320aa15daf1ada8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c\""
May 14 23:41:55.135910 containerd[1499]: time="2025-05-14T23:41:55.135770258Z" level=info msg="StartContainer for \"3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c\""
May 14 23:41:55.138035 containerd[1499]: time="2025-05-14T23:41:55.137979682Z" level=info msg="connecting to shim 3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c" address="unix:///run/containerd/s/51ff5a75e47ed8727f85b89ec2218657c9e5fb27f581a0c2e8cb92e1c7dc654e" protocol=ttrpc version=3
May 14 23:41:55.167752 systemd[1]: Started cri-containerd-3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c.scope - libcontainer container 3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c.
May 14 23:41:55.243552 containerd[1499]: time="2025-05-14T23:41:55.242824601Z" level=info msg="StartContainer for \"3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c\" returns successfully"
May 14 23:41:55.299290 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 14 23:41:55.299667 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 14 23:41:55.618993 kubelet[2727]: E0514 23:41:55.618453 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:55.767443 containerd[1499]: time="2025-05-14T23:41:55.767382491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c\" id:\"3fddae216f0ee34d98dc907cf222c1e1a3a8006596cfec4d1de3df6078a5e194\" pid:3886 exit_status:1 exited_at:{seconds:1747266115 nanos:767012407}"
May 14 23:41:56.625619 kubelet[2727]: E0514 23:41:56.625569 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:56.801523 containerd[1499]: time="2025-05-14T23:41:56.800444978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c\" id:\"0487d3671d10324110e9c99e30a95d2dcb95745de78ea27a022ed83d8c62234a\" pid:4003 exit_status:1 exited_at:{seconds:1747266116 nanos:799257370}"
May 14 23:41:56.805752 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:39876.service - OpenSSH per-connection server daemon (10.0.0.1:39876).
May 14 23:41:56.807645 kernel: bpftool[4056]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 14 23:41:56.884226 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 39876 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:41:56.886466 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:56.892464 systemd-logind[1486]: New session 12 of user core.
May 14 23:41:56.899690 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 23:41:57.451343 systemd-networkd[1439]: vxlan.calico: Link UP
May 14 23:41:57.451357 systemd-networkd[1439]: vxlan.calico: Gained carrier
May 14 23:41:57.458546 sshd[4059]: Connection closed by 10.0.0.1 port 39876
May 14 23:41:57.460153 sshd-session[4057]: pam_unix(sshd:session): session closed for user core
May 14 23:41:57.468865 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:39876.service: Deactivated successfully.
May 14 23:41:57.471697 systemd[1]: session-12.scope: Deactivated successfully.
May 14 23:41:57.473887 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit.
May 14 23:41:57.477114 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:39890.service - OpenSSH per-connection server daemon (10.0.0.1:39890).
May 14 23:41:57.478862 systemd-logind[1486]: Removed session 12.
May 14 23:41:57.493036 kubelet[2727]: E0514 23:41:57.492296 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:57.493425 containerd[1499]: time="2025-05-14T23:41:57.493359528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skqbv,Uid:c4f9e896-1a7f-4713-ae61-8318d251676c,Namespace:kube-system,Attempt:0,}"
May 14 23:41:57.541691 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 39890 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:41:57.544988 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:57.552388 systemd-logind[1486]: New session 13 of user core.
May 14 23:41:57.558699 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 23:41:57.738957 systemd-networkd[1439]: calide7b2df8a8a: Link UP
May 14 23:41:57.739196 systemd-networkd[1439]: calide7b2df8a8a: Gained carrier
May 14 23:41:57.761760 sshd[4132]: Connection closed by 10.0.0.1 port 39890
May 14 23:41:57.763069 containerd[1499]: 2025-05-14 23:41:57.576 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0 coredns-7db6d8ff4d- kube-system c4f9e896-1a7f-4713-ae61-8318d251676c 774 0 2025-05-14 23:41:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-skqbv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide7b2df8a8a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-"
May 14 23:41:57.763069 containerd[1499]: 2025-05-14 23:41:57.577 [INFO][4118] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0"
May 14 23:41:57.763069 containerd[1499]: 2025-05-14 23:41:57.669 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" HandleID="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Workload="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0"
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.682 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" HandleID="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Workload="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000503c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-skqbv", "timestamp":"2025-05-14 23:41:57.66923468 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.682 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.683 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.683 [INFO][4135] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.685 [INFO][4135] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" host="localhost"
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.695 [INFO][4135] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.701 [INFO][4135] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.706 [INFO][4135] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.709 [INFO][4135] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 14 23:41:57.763264 containerd[1499]: 2025-05-14 23:41:57.709 [INFO][4135] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" host="localhost"
May 14 23:41:57.763573 containerd[1499]: 2025-05-14 23:41:57.711 [INFO][4135] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502
May 14 23:41:57.763573 containerd[1499]: 2025-05-14 23:41:57.716 [INFO][4135] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" host="localhost"
May 14 23:41:57.763573 containerd[1499]: 2025-05-14 23:41:57.723 [INFO][4135] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" host="localhost"
May 14 23:41:57.763573 containerd[1499]: 2025-05-14 23:41:57.723 [INFO][4135] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" host="localhost"
May 14 23:41:57.763573 containerd[1499]: 2025-05-14 23:41:57.723 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 23:41:57.763573 containerd[1499]: 2025-05-14 23:41:57.723 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" HandleID="k8s-pod-network.85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Workload="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0" May 14 23:41:57.763719 containerd[1499]: 2025-05-14 23:41:57.727 [INFO][4118] cni-plugin/k8s.go 386: Populated endpoint ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4f9e896-1a7f-4713-ae61-8318d251676c", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-skqbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide7b2df8a8a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:57.763798 containerd[1499]: 2025-05-14 23:41:57.728 [INFO][4118] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0" May 14 23:41:57.763798 containerd[1499]: 2025-05-14 23:41:57.728 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide7b2df8a8a ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0" May 14 23:41:57.763798 containerd[1499]: 2025-05-14 23:41:57.738 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0" May 14 23:41:57.764062 containerd[1499]: 2025-05-14 23:41:57.738 [INFO][4118] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4f9e896-1a7f-4713-ae61-8318d251676c", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502", Pod:"coredns-7db6d8ff4d-skqbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide7b2df8a8a", MAC:"d2:06:ab:f2:6e:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:57.764062 containerd[1499]: 2025-05-14 23:41:57.758 [INFO][4118] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-skqbv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skqbv-eth0"
May 14 23:41:57.767430 kubelet[2727]: I0514 23:41:57.766078 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jtcp9" podStartSLOduration=3.192252473 podStartE2EDuration="25.766051873s" podCreationTimestamp="2025-05-14 23:41:32 +0000 UTC" firstStartedPulling="2025-05-14 23:41:32.519742536 +0000 UTC m=+21.108844937" lastFinishedPulling="2025-05-14 23:41:55.093541926 +0000 UTC m=+43.682644337" observedRunningTime="2025-05-14 23:41:55.656754635 +0000 UTC m=+44.245857036" watchObservedRunningTime="2025-05-14 23:41:57.766051873 +0000 UTC m=+46.355154274"
May 14 23:41:57.766558 sshd-session[4103]: pam_unix(sshd:session): session closed for user core
May 14 23:41:57.777562 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:39890.service: Deactivated successfully.
May 14 23:41:57.784068 systemd[1]: session-13.scope: Deactivated successfully.
May 14 23:41:57.789740 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit.
May 14 23:41:57.793567 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:39902.service - OpenSSH per-connection server daemon (10.0.0.1:39902).
May 14 23:41:57.795341 systemd-logind[1486]: Removed session 13.
May 14 23:41:57.876216 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 39902 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:41:57.878525 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:57.884843 systemd-logind[1486]: New session 14 of user core.
May 14 23:41:57.892764 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 23:41:58.146278 sshd[4200]: Connection closed by 10.0.0.1 port 39902
May 14 23:41:58.146805 sshd-session[4172]: pam_unix(sshd:session): session closed for user core
May 14 23:41:58.150189 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:39902.service: Deactivated successfully.
May 14 23:41:58.152639 systemd[1]: session-14.scope: Deactivated successfully.
May 14 23:41:58.154788 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit.
May 14 23:41:58.155913 systemd-logind[1486]: Removed session 14.
May 14 23:41:58.189838 containerd[1499]: time="2025-05-14T23:41:58.189772625Z" level=info msg="connecting to shim 85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502" address="unix:///run/containerd/s/2a6ad2738e2f672b94482e5a4ecd1422156a8987462f382f01c1d82f8f761ca6" namespace=k8s.io protocol=ttrpc version=3
May 14 23:41:58.222778 systemd[1]: Started cri-containerd-85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502.scope - libcontainer container 85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502.
May 14 23:41:58.235820 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 23:41:58.312578 containerd[1499]: time="2025-05-14T23:41:58.312523867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skqbv,Uid:c4f9e896-1a7f-4713-ae61-8318d251676c,Namespace:kube-system,Attempt:0,} returns sandbox id \"85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502\""
May 14 23:41:58.313611 kubelet[2727]: E0514 23:41:58.313570 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:58.315763 containerd[1499]: time="2025-05-14T23:41:58.315716145Z" level=info msg="CreateContainer within sandbox \"85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:41:58.493004 containerd[1499]: time="2025-05-14T23:41:58.492809055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-lk74k,Uid:fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235,Namespace:calico-apiserver,Attempt:0,}"
May 14 23:41:58.493457 containerd[1499]: time="2025-05-14T23:41:58.493367452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598c5b97d5-d8vn5,Uid:fbea6818-76ed-4ce8-9c01-cffb17b8838f,Namespace:calico-system,Attempt:0,}"
May 14 23:41:58.494071 containerd[1499]: time="2025-05-14T23:41:58.493953602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6wg62,Uid:2231288d-40de-4b1d-a5cc-5c1b3be4909b,Namespace:calico-system,Attempt:0,}"
May 14 23:41:58.581901 containerd[1499]: time="2025-05-14T23:41:58.581829273Z" level=info msg="Container 57c5ce21850ace579dde6caef23ba583ab11a24f6e859256088bfd9b634e63f6: CDI devices from CRI Config.CDIDevices: []"
May 14 23:41:58.586869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159386547.mount: Deactivated successfully.
May 14 23:41:58.784668 containerd[1499]: time="2025-05-14T23:41:58.784473352Z" level=info msg="CreateContainer within sandbox \"85c014915e144ff348cf9a0705d92e054a47abbe2717851ada882c5000a5d502\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57c5ce21850ace579dde6caef23ba583ab11a24f6e859256088bfd9b634e63f6\""
May 14 23:41:58.787266 containerd[1499]: time="2025-05-14T23:41:58.787176382Z" level=info msg="StartContainer for \"57c5ce21850ace579dde6caef23ba583ab11a24f6e859256088bfd9b634e63f6\""
May 14 23:41:58.788561 containerd[1499]: time="2025-05-14T23:41:58.788518690Z" level=info msg="connecting to shim 57c5ce21850ace579dde6caef23ba583ab11a24f6e859256088bfd9b634e63f6" address="unix:///run/containerd/s/2a6ad2738e2f672b94482e5a4ecd1422156a8987462f382f01c1d82f8f761ca6" protocol=ttrpc version=3
May 14 23:41:58.833996 systemd[1]: Started cri-containerd-57c5ce21850ace579dde6caef23ba583ab11a24f6e859256088bfd9b634e63f6.scope - libcontainer container 57c5ce21850ace579dde6caef23ba583ab11a24f6e859256088bfd9b634e63f6.
May 14 23:41:58.907825 containerd[1499]: time="2025-05-14T23:41:58.907585060Z" level=info msg="StartContainer for \"57c5ce21850ace579dde6caef23ba583ab11a24f6e859256088bfd9b634e63f6\" returns successfully" May 14 23:41:58.936846 systemd-networkd[1439]: calie1093c097db: Link UP May 14 23:41:58.937058 systemd-networkd[1439]: calie1093c097db: Gained carrier May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.818 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0 calico-apiserver-6c668dd479- calico-apiserver fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235 779 0 2025-05-14 23:41:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c668dd479 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c668dd479-lk74k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie1093c097db [] []}} ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.819 [INFO][4261] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.869 [INFO][4316] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" 
HandleID="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Workload="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.886 [INFO][4316] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" HandleID="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Workload="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c668dd479-lk74k", "timestamp":"2025-05-14 23:41:58.869306636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.886 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.886 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.886 [INFO][4316] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.888 [INFO][4316] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.893 [INFO][4316] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.899 [INFO][4316] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.905 [INFO][4316] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.909 [INFO][4316] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.910 [INFO][4316] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.913 [INFO][4316] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.918 [INFO][4316] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.924 [INFO][4316] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.924 [INFO][4316] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" host="localhost" May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.924 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 23:41:58.958857 containerd[1499]: 2025-05-14 23:41:58.924 [INFO][4316] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" HandleID="k8s-pod-network.1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Workload="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" May 14 23:41:58.959832 containerd[1499]: 2025-05-14 23:41:58.931 [INFO][4261] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0", GenerateName:"calico-apiserver-6c668dd479-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c668dd479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c668dd479-lk74k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1093c097db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:58.959832 containerd[1499]: 2025-05-14 23:41:58.931 [INFO][4261] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" May 14 23:41:58.959832 containerd[1499]: 2025-05-14 23:41:58.931 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1093c097db ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" May 14 23:41:58.959832 containerd[1499]: 2025-05-14 23:41:58.937 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" May 14 23:41:58.959832 containerd[1499]: 2025-05-14 23:41:58.938 [INFO][4261] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0", GenerateName:"calico-apiserver-6c668dd479-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c668dd479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be", Pod:"calico-apiserver-6c668dd479-lk74k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1093c097db", MAC:"26:cf:64:c4:15:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:58.959832 containerd[1499]: 2025-05-14 23:41:58.952 [INFO][4261] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" 
Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-lk74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--lk74k-eth0" May 14 23:41:58.992692 systemd-networkd[1439]: cali57a5706ddbf: Link UP May 14 23:41:58.995336 systemd-networkd[1439]: cali57a5706ddbf: Gained carrier May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.841 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6wg62-eth0 csi-node-driver- calico-system 2231288d-40de-4b1d-a5cc-5c1b3be4909b 613 0 2025-05-14 23:41:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6wg62 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali57a5706ddbf [] []}} ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.843 [INFO][4287] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-eth0" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.897 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" HandleID="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Workload="localhost-k8s-csi--node--driver--6wg62-eth0" May 14 23:41:59.023313 containerd[1499]: 
2025-05-14 23:41:58.909 [INFO][4331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" HandleID="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Workload="localhost-k8s-csi--node--driver--6wg62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6wg62", "timestamp":"2025-05-14 23:41:58.897323371 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.909 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.925 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.925 [INFO][4331] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.929 [INFO][4331] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.939 [INFO][4331] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.947 [INFO][4331] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.952 [INFO][4331] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.959 [INFO][4331] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.959 [INFO][4331] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.962 [INFO][4331] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428 May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.968 [INFO][4331] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.976 [INFO][4331] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.976 [INFO][4331] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" host="localhost" May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.976 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 23:41:59.023313 containerd[1499]: 2025-05-14 23:41:58.976 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" HandleID="k8s-pod-network.81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Workload="localhost-k8s-csi--node--driver--6wg62-eth0" May 14 23:41:59.024793 containerd[1499]: 2025-05-14 23:41:58.983 [INFO][4287] cni-plugin/k8s.go 386: Populated endpoint ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6wg62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2231288d-40de-4b1d-a5cc-5c1b3be4909b", ResourceVersion:"613", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6wg62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a5706ddbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:59.024793 containerd[1499]: 2025-05-14 23:41:58.983 [INFO][4287] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-eth0" May 14 23:41:59.024793 containerd[1499]: 2025-05-14 23:41:58.983 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57a5706ddbf ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-eth0" May 14 23:41:59.024793 containerd[1499]: 2025-05-14 23:41:58.996 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-eth0" May 14 23:41:59.024793 containerd[1499]: 2025-05-14 23:41:58.998 [INFO][4287] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" 
Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6wg62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2231288d-40de-4b1d-a5cc-5c1b3be4909b", ResourceVersion:"613", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428", Pod:"csi-node-driver-6wg62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a5706ddbf", MAC:"ca:27:dc:0a:f9:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:59.024793 containerd[1499]: 2025-05-14 23:41:59.018 [INFO][4287] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" Namespace="calico-system" Pod="csi-node-driver-6wg62" WorkloadEndpoint="localhost-k8s-csi--node--driver--6wg62-eth0" May 14 23:41:59.045295 containerd[1499]: 
time="2025-05-14T23:41:59.044561437Z" level=info msg="connecting to shim 1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be" address="unix:///run/containerd/s/9860a8ea3cf8a9b90ac765efa807d07d83b79c197879e91a28dc338f5d134c87" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:59.073810 systemd-networkd[1439]: cali94b9b90bbc6: Link UP May 14 23:41:59.076748 systemd-networkd[1439]: cali94b9b90bbc6: Gained carrier May 14 23:41:59.088297 containerd[1499]: time="2025-05-14T23:41:59.087685713Z" level=info msg="connecting to shim 81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428" address="unix:///run/containerd/s/08432c38079d27048b93c93118e813ab6cdbe1445475a58d47367e363d1eaca4" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.847 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0 calico-kube-controllers-598c5b97d5- calico-system fbea6818-76ed-4ce8-9c01-cffb17b8838f 770 0 2025-05-14 23:41:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:598c5b97d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-598c5b97d5-d8vn5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali94b9b90bbc6 [] []}} ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.847 [INFO][4272] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.899 [INFO][4337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" HandleID="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Workload="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.922 [INFO][4337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" HandleID="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Workload="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-598c5b97d5-d8vn5", "timestamp":"2025-05-14 23:41:58.899708404 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.922 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.976 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.979 [INFO][4337] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.983 [INFO][4337] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:58.996 [INFO][4337] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.007 [INFO][4337] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.011 [INFO][4337] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.019 [INFO][4337] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.019 [INFO][4337] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.024 [INFO][4337] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.036 [INFO][4337] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.049 [INFO][4337] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.049 [INFO][4337] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" host="localhost" May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.049 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 23:41:59.100911 containerd[1499]: 2025-05-14 23:41:59.049 [INFO][4337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" HandleID="k8s-pod-network.b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Workload="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" May 14 23:41:59.101741 containerd[1499]: 2025-05-14 23:41:59.063 [INFO][4272] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0", GenerateName:"calico-kube-controllers-598c5b97d5-", Namespace:"calico-system", SelfLink:"", UID:"fbea6818-76ed-4ce8-9c01-cffb17b8838f", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598c5b97d5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-598c5b97d5-d8vn5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94b9b90bbc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:59.101741 containerd[1499]: 2025-05-14 23:41:59.063 [INFO][4272] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" May 14 23:41:59.101741 containerd[1499]: 2025-05-14 23:41:59.064 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94b9b90bbc6 ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" May 14 23:41:59.101741 containerd[1499]: 2025-05-14 23:41:59.078 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" May 14 23:41:59.101741 containerd[1499]: 2025-05-14 23:41:59.079 [INFO][4272] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0", GenerateName:"calico-kube-controllers-598c5b97d5-", Namespace:"calico-system", SelfLink:"", UID:"fbea6818-76ed-4ce8-9c01-cffb17b8838f", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598c5b97d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa", Pod:"calico-kube-controllers-598c5b97d5-d8vn5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94b9b90bbc6", MAC:"b2:17:23:6b:d1:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:41:59.101741 containerd[1499]: 2025-05-14 23:41:59.094 [INFO][4272] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" Namespace="calico-system" Pod="calico-kube-controllers-598c5b97d5-d8vn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598c5b97d5--d8vn5-eth0" May 14 23:41:59.114016 systemd[1]: Started cri-containerd-1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be.scope - libcontainer container 1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be. May 14 23:41:59.118821 systemd[1]: Started cri-containerd-81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428.scope - libcontainer container 81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428. May 14 23:41:59.141026 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:41:59.155034 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:41:59.167365 containerd[1499]: time="2025-05-14T23:41:59.167271564Z" level=info msg="connecting to shim b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa" address="unix:///run/containerd/s/6bd5b86628f1c71feb4c9fd903ffd41daef35f02ec3adef597e94d6f947ca472" namespace=k8s.io protocol=ttrpc version=3 May 14 23:41:59.201059 containerd[1499]: time="2025-05-14T23:41:59.201012277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-lk74k,Uid:fbbd9d27-e7f2-4ecf-ba40-ec5a5cd99235,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be\"" May 14 23:41:59.206129 containerd[1499]: time="2025-05-14T23:41:59.203335324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 23:41:59.205835 systemd[1]: Started cri-containerd-b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa.scope - libcontainer container 
b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa. May 14 23:41:59.217958 containerd[1499]: time="2025-05-14T23:41:59.217708266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6wg62,Uid:2231288d-40de-4b1d-a5cc-5c1b3be4909b,Namespace:calico-system,Attempt:0,} returns sandbox id \"81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428\"" May 14 23:41:59.225860 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:41:59.245851 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL May 14 23:41:59.265787 containerd[1499]: time="2025-05-14T23:41:59.265738524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598c5b97d5-d8vn5,Uid:fbea6818-76ed-4ce8-9c01-cffb17b8838f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa\"" May 14 23:41:59.309780 systemd-networkd[1439]: calide7b2df8a8a: Gained IPv6LL May 14 23:41:59.643112 kubelet[2727]: E0514 23:41:59.642477 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:41:59.687596 kubelet[2727]: I0514 23:41:59.687309 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-skqbv" podStartSLOduration=34.686774878 podStartE2EDuration="34.686774878s" podCreationTimestamp="2025-05-14 23:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:41:59.685151703 +0000 UTC m=+48.274254114" watchObservedRunningTime="2025-05-14 23:41:59.686774878 +0000 UTC m=+48.275877279" May 14 23:42:00.269703 systemd-networkd[1439]: cali57a5706ddbf: Gained IPv6LL May 14 23:42:00.333891 systemd-networkd[1439]: cali94b9b90bbc6: Gained IPv6LL May 14 
23:42:00.491906 kubelet[2727]: E0514 23:42:00.491851 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:00.492357 containerd[1499]: time="2025-05-14T23:42:00.492293219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wshnq,Uid:8c1deacb-91fb-4a4a-8b93-e83b05a54eeb,Namespace:kube-system,Attempt:0,}" May 14 23:42:00.653109 kubelet[2727]: E0514 23:42:00.652897 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:00.659993 systemd-networkd[1439]: calid2b122f173c: Link UP May 14 23:42:00.668614 systemd-networkd[1439]: calid2b122f173c: Gained carrier May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.527 [INFO][4558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0 coredns-7db6d8ff4d- kube-system 8c1deacb-91fb-4a4a-8b93-e83b05a54eeb 782 0 2025-05-14 23:41:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-wshnq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2b122f173c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.527 [INFO][4558] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.564 [INFO][4573] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" HandleID="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Workload="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.574 [INFO][4573] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" HandleID="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Workload="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012cc80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-wshnq", "timestamp":"2025-05-14 23:42:00.564290035 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.574 [INFO][4573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.574 [INFO][4573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.574 [INFO][4573] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.577 [INFO][4573] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.583 [INFO][4573] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.587 [INFO][4573] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.590 [INFO][4573] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.595 [INFO][4573] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.595 [INFO][4573] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.599 [INFO][4573] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945 May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.613 [INFO][4573] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.630 [INFO][4573] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.630 [INFO][4573] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" host="localhost" May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.630 [INFO][4573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 23:42:00.682131 containerd[1499]: 2025-05-14 23:42:00.630 [INFO][4573] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" HandleID="k8s-pod-network.ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Workload="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" May 14 23:42:00.682823 containerd[1499]: 2025-05-14 23:42:00.655 [INFO][4558] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8c1deacb-91fb-4a4a-8b93-e83b05a54eeb", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-wshnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2b122f173c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:42:00.682823 containerd[1499]: 2025-05-14 23:42:00.656 [INFO][4558] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" May 14 23:42:00.682823 containerd[1499]: 2025-05-14 23:42:00.656 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2b122f173c ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" May 14 23:42:00.682823 containerd[1499]: 2025-05-14 23:42:00.659 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" May 14 
23:42:00.682823 containerd[1499]: 2025-05-14 23:42:00.659 [INFO][4558] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8c1deacb-91fb-4a4a-8b93-e83b05a54eeb", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945", Pod:"coredns-7db6d8ff4d-wshnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2b122f173c", MAC:"16:09:b8:14:ae:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:42:00.682823 containerd[1499]: 2025-05-14 23:42:00.675 [INFO][4558] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wshnq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wshnq-eth0" May 14 23:42:00.759843 containerd[1499]: time="2025-05-14T23:42:00.759768447Z" level=info msg="connecting to shim ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945" address="unix:///run/containerd/s/1f31891f6566a0cab8f63ddb86f900f472abba7fcee2c6953fc893e9c654cfeb" namespace=k8s.io protocol=ttrpc version=3 May 14 23:42:00.798829 systemd[1]: Started cri-containerd-ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945.scope - libcontainer container ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945. 
May 14 23:42:00.812747 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:42:00.846649 systemd-networkd[1439]: calie1093c097db: Gained IPv6LL May 14 23:42:00.933476 containerd[1499]: time="2025-05-14T23:42:00.933379188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wshnq,Uid:8c1deacb-91fb-4a4a-8b93-e83b05a54eeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945\"" May 14 23:42:00.934550 kubelet[2727]: E0514 23:42:00.934510 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:00.937371 containerd[1499]: time="2025-05-14T23:42:00.937162584Z" level=info msg="CreateContainer within sandbox \"ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:42:00.958610 containerd[1499]: time="2025-05-14T23:42:00.955849659Z" level=info msg="Container 66a438c458f1c4f2975061cbd52ad03632c61754310d61f13d1d39da06e8b31b: CDI devices from CRI Config.CDIDevices: []" May 14 23:42:00.972440 containerd[1499]: time="2025-05-14T23:42:00.972372334Z" level=info msg="CreateContainer within sandbox \"ad9a158f32591ccc46f67a02ef91d17b9e1261644458e16ca528305b33d06945\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66a438c458f1c4f2975061cbd52ad03632c61754310d61f13d1d39da06e8b31b\"" May 14 23:42:00.973169 containerd[1499]: time="2025-05-14T23:42:00.973129353Z" level=info msg="StartContainer for \"66a438c458f1c4f2975061cbd52ad03632c61754310d61f13d1d39da06e8b31b\"" May 14 23:42:00.974089 containerd[1499]: time="2025-05-14T23:42:00.974038700Z" level=info msg="connecting to shim 66a438c458f1c4f2975061cbd52ad03632c61754310d61f13d1d39da06e8b31b" 
address="unix:///run/containerd/s/1f31891f6566a0cab8f63ddb86f900f472abba7fcee2c6953fc893e9c654cfeb" protocol=ttrpc version=3 May 14 23:42:01.009628 systemd[1]: Started cri-containerd-66a438c458f1c4f2975061cbd52ad03632c61754310d61f13d1d39da06e8b31b.scope - libcontainer container 66a438c458f1c4f2975061cbd52ad03632c61754310d61f13d1d39da06e8b31b. May 14 23:42:01.049510 containerd[1499]: time="2025-05-14T23:42:01.049399186Z" level=info msg="StartContainer for \"66a438c458f1c4f2975061cbd52ad03632c61754310d61f13d1d39da06e8b31b\" returns successfully" May 14 23:42:01.492497 containerd[1499]: time="2025-05-14T23:42:01.492416561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-hrrxf,Uid:c94e7212-5b50-4b5f-8ec3-5e1470b006a9,Namespace:calico-apiserver,Attempt:0,}" May 14 23:42:01.656569 kubelet[2727]: E0514 23:42:01.656472 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:01.657056 kubelet[2727]: E0514 23:42:01.656674 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:01.766781 kubelet[2727]: I0514 23:42:01.766426 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wshnq" podStartSLOduration=36.76639442 podStartE2EDuration="36.76639442s" podCreationTimestamp="2025-05-14 23:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:42:01.764934351 +0000 UTC m=+50.354036752" watchObservedRunningTime="2025-05-14 23:42:01.76639442 +0000 UTC m=+50.355496811" May 14 23:42:01.807009 systemd-networkd[1439]: calid2b122f173c: Gained IPv6LL May 14 23:42:02.110749 systemd-networkd[1439]: calia32c6e509e8: Link UP May 14 
23:42:02.111396 systemd-networkd[1439]: calia32c6e509e8: Gained carrier May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.776 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0 calico-apiserver-6c668dd479- calico-apiserver c94e7212-5b50-4b5f-8ec3-5e1470b006a9 785 0 2025-05-14 23:41:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c668dd479 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c668dd479-hrrxf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia32c6e509e8 [] []}} ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.776 [INFO][4678] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.980 [INFO][4694] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" HandleID="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Workload="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.987 [INFO][4694] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" HandleID="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Workload="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcd10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c668dd479-hrrxf", "timestamp":"2025-05-14 23:42:01.980017861 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.987 [INFO][4694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.987 [INFO][4694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.987 [INFO][4694] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.988 [INFO][4694] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.992 [INFO][4694] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.995 [INFO][4694] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.996 [INFO][4694] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.998 [INFO][4694] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.998 [INFO][4694] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:01.999 [INFO][4694] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:02.022 [INFO][4694] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:02.104 [INFO][4694] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:02.104 [INFO][4694] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" host="localhost" May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:02.104 [INFO][4694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:42:02.186015 containerd[1499]: 2025-05-14 23:42:02.104 [INFO][4694] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" HandleID="k8s-pod-network.668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Workload="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" May 14 23:42:02.187231 containerd[1499]: 2025-05-14 23:42:02.108 [INFO][4678] cni-plugin/k8s.go 386: Populated endpoint ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0", GenerateName:"calico-apiserver-6c668dd479-", Namespace:"calico-apiserver", SelfLink:"", UID:"c94e7212-5b50-4b5f-8ec3-5e1470b006a9", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c668dd479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c668dd479-hrrxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia32c6e509e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:42:02.187231 containerd[1499]: 2025-05-14 23:42:02.108 [INFO][4678] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" May 14 23:42:02.187231 containerd[1499]: 2025-05-14 23:42:02.108 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia32c6e509e8 ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" May 14 23:42:02.187231 containerd[1499]: 2025-05-14 23:42:02.110 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" May 14 23:42:02.187231 containerd[1499]: 2025-05-14 23:42:02.111 [INFO][4678] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0", GenerateName:"calico-apiserver-6c668dd479-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"c94e7212-5b50-4b5f-8ec3-5e1470b006a9", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c668dd479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d", Pod:"calico-apiserver-6c668dd479-hrrxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia32c6e509e8", MAC:"4e:1e:46:76:b3:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:42:02.187231 containerd[1499]: 2025-05-14 23:42:02.182 [INFO][4678] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" Namespace="calico-apiserver" Pod="calico-apiserver-6c668dd479-hrrxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c668dd479--hrrxf-eth0" May 14 23:42:02.424790 containerd[1499]: time="2025-05-14T23:42:02.424641795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:02.647622 containerd[1499]: time="2025-05-14T23:42:02.647526692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active 
requests=0, bytes read=43021437" May 14 23:42:02.658027 kubelet[2727]: E0514 23:42:02.657912 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:02.812238 containerd[1499]: time="2025-05-14T23:42:02.812176126Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:02.974016 containerd[1499]: time="2025-05-14T23:42:02.973939064Z" level=info msg="connecting to shim 668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d" address="unix:///run/containerd/s/f247b131578c6b62c8c3c6675d47b343fe2b5524eef266d0e980361a6cd958d4" namespace=k8s.io protocol=ttrpc version=3 May 14 23:42:02.997826 containerd[1499]: time="2025-05-14T23:42:02.997777583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:02.998774 containerd[1499]: time="2025-05-14T23:42:02.998747842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.795385188s" May 14 23:42:02.998774 containerd[1499]: time="2025-05-14T23:42:02.998778710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 23:42:03.000447 containerd[1499]: time="2025-05-14T23:42:03.000400694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 
23:42:03.001666 containerd[1499]: time="2025-05-14T23:42:03.001615963Z" level=info msg="CreateContainer within sandbox \"1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 23:42:03.019667 systemd[1]: Started cri-containerd-668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d.scope - libcontainer container 668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d. May 14 23:42:03.033841 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:42:03.164612 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:39914.service - OpenSSH per-connection server daemon (10.0.0.1:39914). May 14 23:42:03.194650 containerd[1499]: time="2025-05-14T23:42:03.194237090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c668dd479-hrrxf,Uid:c94e7212-5b50-4b5f-8ec3-5e1470b006a9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d\"" May 14 23:42:03.202793 containerd[1499]: time="2025-05-14T23:42:03.200609504Z" level=info msg="CreateContainer within sandbox \"668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 23:42:03.469735 systemd-networkd[1439]: calia32c6e509e8: Gained IPv6LL May 14 23:42:03.660091 kubelet[2727]: E0514 23:42:03.660049 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:03.801618 sshd[4775]: Accepted publickey for core from 10.0.0.1 port 39914 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8 May 14 23:42:03.804206 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:03.809623 systemd-logind[1486]: 
New session 15 of user core. May 14 23:42:03.815630 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 23:42:03.939527 containerd[1499]: time="2025-05-14T23:42:03.939458447Z" level=info msg="Container 845079d096e018dbb64007de58a5a5b32172dc26ec6afdceb41c4f563d235e4f: CDI devices from CRI Config.CDIDevices: []" May 14 23:42:03.947505 containerd[1499]: time="2025-05-14T23:42:03.946567893Z" level=info msg="Container cd6dec39f8bf29f156115f2fb526f64f7ae648a8e8129206b81d547cffedcd1f: CDI devices from CRI Config.CDIDevices: []" May 14 23:42:03.965769 sshd[4777]: Connection closed by 10.0.0.1 port 39914 May 14 23:42:03.966788 sshd-session[4775]: pam_unix(sshd:session): session closed for user core May 14 23:42:03.971565 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:39914.service: Deactivated successfully. May 14 23:42:03.973910 systemd[1]: session-15.scope: Deactivated successfully. May 14 23:42:03.975998 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. May 14 23:42:03.977029 systemd-logind[1486]: Removed session 15. 
May 14 23:42:04.033124 containerd[1499]: time="2025-05-14T23:42:04.033066556Z" level=info msg="CreateContainer within sandbox \"1a02bb7164fb33c41079c58eb47d2edfe6793fa88a4b79182e0608f349c752be\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"845079d096e018dbb64007de58a5a5b32172dc26ec6afdceb41c4f563d235e4f\"" May 14 23:42:04.033725 containerd[1499]: time="2025-05-14T23:42:04.033691869Z" level=info msg="StartContainer for \"845079d096e018dbb64007de58a5a5b32172dc26ec6afdceb41c4f563d235e4f\"" May 14 23:42:04.034756 containerd[1499]: time="2025-05-14T23:42:04.034729677Z" level=info msg="connecting to shim 845079d096e018dbb64007de58a5a5b32172dc26ec6afdceb41c4f563d235e4f" address="unix:///run/containerd/s/9860a8ea3cf8a9b90ac765efa807d07d83b79c197879e91a28dc338f5d134c87" protocol=ttrpc version=3 May 14 23:42:04.057633 systemd[1]: Started cri-containerd-845079d096e018dbb64007de58a5a5b32172dc26ec6afdceb41c4f563d235e4f.scope - libcontainer container 845079d096e018dbb64007de58a5a5b32172dc26ec6afdceb41c4f563d235e4f. 
May 14 23:42:04.108170 containerd[1499]: time="2025-05-14T23:42:04.108107676Z" level=info msg="CreateContainer within sandbox \"668be2daf7290340346eaef1b54f597e65d588d5ab71726d22e009e24cb6725d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cd6dec39f8bf29f156115f2fb526f64f7ae648a8e8129206b81d547cffedcd1f\"" May 14 23:42:04.108946 containerd[1499]: time="2025-05-14T23:42:04.108912466Z" level=info msg="StartContainer for \"cd6dec39f8bf29f156115f2fb526f64f7ae648a8e8129206b81d547cffedcd1f\"" May 14 23:42:04.110459 containerd[1499]: time="2025-05-14T23:42:04.110096898Z" level=info msg="connecting to shim cd6dec39f8bf29f156115f2fb526f64f7ae648a8e8129206b81d547cffedcd1f" address="unix:///run/containerd/s/f247b131578c6b62c8c3c6675d47b343fe2b5524eef266d0e980361a6cd958d4" protocol=ttrpc version=3 May 14 23:42:04.134636 systemd[1]: Started cri-containerd-cd6dec39f8bf29f156115f2fb526f64f7ae648a8e8129206b81d547cffedcd1f.scope - libcontainer container cd6dec39f8bf29f156115f2fb526f64f7ae648a8e8129206b81d547cffedcd1f. 
May 14 23:42:04.292130 containerd[1499]: time="2025-05-14T23:42:04.292067016Z" level=info msg="StartContainer for \"cd6dec39f8bf29f156115f2fb526f64f7ae648a8e8129206b81d547cffedcd1f\" returns successfully" May 14 23:42:04.294313 containerd[1499]: time="2025-05-14T23:42:04.294026302Z" level=info msg="StartContainer for \"845079d096e018dbb64007de58a5a5b32172dc26ec6afdceb41c4f563d235e4f\" returns successfully" May 14 23:42:04.668467 kubelet[2727]: E0514 23:42:04.668427 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:04.747353 kubelet[2727]: I0514 23:42:04.747153 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c668dd479-hrrxf" podStartSLOduration=32.747121484 podStartE2EDuration="32.747121484s" podCreationTimestamp="2025-05-14 23:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:42:04.739045424 +0000 UTC m=+53.328147835" watchObservedRunningTime="2025-05-14 23:42:04.747121484 +0000 UTC m=+53.336223885" May 14 23:42:05.196739 containerd[1499]: time="2025-05-14T23:42:05.196680715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:05.198629 containerd[1499]: time="2025-05-14T23:42:05.198553698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 23:42:05.200153 containerd[1499]: time="2025-05-14T23:42:05.200127971Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:05.203535 containerd[1499]: time="2025-05-14T23:42:05.203474950Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:05.204030 containerd[1499]: time="2025-05-14T23:42:05.203998783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.203557774s" May 14 23:42:05.204123 containerd[1499]: time="2025-05-14T23:42:05.204035652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 23:42:05.205378 containerd[1499]: time="2025-05-14T23:42:05.205344597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 23:42:05.206692 containerd[1499]: time="2025-05-14T23:42:05.206659364Z" level=info msg="CreateContainer within sandbox \"81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 23:42:05.234756 containerd[1499]: time="2025-05-14T23:42:05.234685809Z" level=info msg="Container cb043126f0910895752d35597bbfdae6a3e9325a6fedc1239aebbaab2bb6a555: CDI devices from CRI Config.CDIDevices: []" May 14 23:42:05.240762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266941709.mount: Deactivated successfully. 
May 14 23:42:05.251832 containerd[1499]: time="2025-05-14T23:42:05.251762723Z" level=info msg="CreateContainer within sandbox \"81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cb043126f0910895752d35597bbfdae6a3e9325a6fedc1239aebbaab2bb6a555\"" May 14 23:42:05.254373 containerd[1499]: time="2025-05-14T23:42:05.254307567Z" level=info msg="StartContainer for \"cb043126f0910895752d35597bbfdae6a3e9325a6fedc1239aebbaab2bb6a555\"" May 14 23:42:05.256368 containerd[1499]: time="2025-05-14T23:42:05.256032293Z" level=info msg="connecting to shim cb043126f0910895752d35597bbfdae6a3e9325a6fedc1239aebbaab2bb6a555" address="unix:///run/containerd/s/08432c38079d27048b93c93118e813ab6cdbe1445475a58d47367e363d1eaca4" protocol=ttrpc version=3 May 14 23:42:05.286818 systemd[1]: Started cri-containerd-cb043126f0910895752d35597bbfdae6a3e9325a6fedc1239aebbaab2bb6a555.scope - libcontainer container cb043126f0910895752d35597bbfdae6a3e9325a6fedc1239aebbaab2bb6a555. 
May 14 23:42:05.349725 containerd[1499]: time="2025-05-14T23:42:05.349432995Z" level=info msg="StartContainer for \"cb043126f0910895752d35597bbfdae6a3e9325a6fedc1239aebbaab2bb6a555\" returns successfully" May 14 23:42:05.672215 kubelet[2727]: I0514 23:42:05.672170 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:42:05.672826 kubelet[2727]: I0514 23:42:05.672170 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:42:07.270706 containerd[1499]: time="2025-05-14T23:42:07.270650285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c\" id:\"77bf90f1d203f42b64b05c702907cf1c62bbb66c865a83a90bb8c0bd315527b9\" pid:4917 exited_at:{seconds:1747266127 nanos:270205081}" May 14 23:42:07.273432 kubelet[2727]: E0514 23:42:07.273365 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:07.291406 kubelet[2727]: I0514 23:42:07.290959 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c668dd479-lk74k" podStartSLOduration=31.494308742 podStartE2EDuration="35.290939466s" podCreationTimestamp="2025-05-14 23:41:32 +0000 UTC" firstStartedPulling="2025-05-14 23:41:59.203064306 +0000 UTC m=+47.792166707" lastFinishedPulling="2025-05-14 23:42:02.99969503 +0000 UTC m=+51.588797431" observedRunningTime="2025-05-14 23:42:04.898876762 +0000 UTC m=+53.487979163" watchObservedRunningTime="2025-05-14 23:42:07.290939466 +0000 UTC m=+55.880041867" May 14 23:42:07.431926 containerd[1499]: time="2025-05-14T23:42:07.431868038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:07.433003 containerd[1499]: time="2025-05-14T23:42:07.432933396Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 23:42:07.434195 containerd[1499]: time="2025-05-14T23:42:07.434147764Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:07.437816 containerd[1499]: time="2025-05-14T23:42:07.437785218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:42:07.438818 containerd[1499]: time="2025-05-14T23:42:07.438753173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.23336762s" May 14 23:42:07.438818 containerd[1499]: time="2025-05-14T23:42:07.438818055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 23:42:07.440371 containerd[1499]: time="2025-05-14T23:42:07.439888874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 23:42:07.449995 containerd[1499]: time="2025-05-14T23:42:07.449937174Z" level=info msg="CreateContainer within sandbox \"b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 23:42:07.459672 containerd[1499]: time="2025-05-14T23:42:07.459616922Z" level=info msg="Container 
1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d: CDI devices from CRI Config.CDIDevices: []" May 14 23:42:07.469566 containerd[1499]: time="2025-05-14T23:42:07.469517144Z" level=info msg="CreateContainer within sandbox \"b667cba6012c4988323e92b76f7d89f6ca56203e34a328efeea6014b111ce0fa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d\"" May 14 23:42:07.471596 containerd[1499]: time="2025-05-14T23:42:07.470335349Z" level=info msg="StartContainer for \"1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d\"" May 14 23:42:07.471671 containerd[1499]: time="2025-05-14T23:42:07.471636941Z" level=info msg="connecting to shim 1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d" address="unix:///run/containerd/s/6bd5b86628f1c71feb4c9fd903ffd41daef35f02ec3adef597e94d6f947ca472" protocol=ttrpc version=3 May 14 23:42:07.496694 systemd[1]: Started cri-containerd-1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d.scope - libcontainer container 1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d. 
May 14 23:42:07.550664 containerd[1499]: time="2025-05-14T23:42:07.550527133Z" level=info msg="StartContainer for \"1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d\" returns successfully"
May 14 23:42:07.695718 kubelet[2727]: I0514 23:42:07.695510 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-598c5b97d5-d8vn5" podStartSLOduration=27.52284142 podStartE2EDuration="35.695461289s" podCreationTimestamp="2025-05-14 23:41:32 +0000 UTC" firstStartedPulling="2025-05-14 23:41:59.267100389 +0000 UTC m=+47.856202790" lastFinishedPulling="2025-05-14 23:42:07.439720258 +0000 UTC m=+56.028822659" observedRunningTime="2025-05-14 23:42:07.693166243 +0000 UTC m=+56.282268644" watchObservedRunningTime="2025-05-14 23:42:07.695461289 +0000 UTC m=+56.284563690"
May 14 23:42:07.729879 containerd[1499]: time="2025-05-14T23:42:07.729824080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d\" id:\"26021f076f3791146e76bee0adda1bd1e4c5791a5a04a856cfb11a19775b7990\" pid:4978 exited_at:{seconds:1747266127 nanos:729519069}"
May 14 23:42:08.980567 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:43544.service - OpenSSH per-connection server daemon (10.0.0.1:43544).
May 14 23:42:09.276658 sshd[4999]: Accepted publickey for core from 10.0.0.1 port 43544 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:09.288405 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:09.293837 systemd-logind[1486]: New session 16 of user core.
May 14 23:42:09.301683 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 23:42:09.347029 containerd[1499]: time="2025-05-14T23:42:09.346949816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:42:09.390789 containerd[1499]: time="2025-05-14T23:42:09.390699728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 14 23:42:09.431010 containerd[1499]: time="2025-05-14T23:42:09.430916782Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:42:09.438520 sshd[5001]: Connection closed by 10.0.0.1 port 43544
May 14 23:42:09.438984 sshd-session[4999]: pam_unix(sshd:session): session closed for user core
May 14 23:42:09.443714 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:43544.service: Deactivated successfully.
May 14 23:42:09.446438 systemd[1]: session-16.scope: Deactivated successfully.
May 14 23:42:09.447524 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit.
May 14 23:42:09.448423 systemd-logind[1486]: Removed session 16.
May 14 23:42:09.476847 containerd[1499]: time="2025-05-14T23:42:09.476757246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:42:09.477430 containerd[1499]: time="2025-05-14T23:42:09.477375786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.037455243s"
May 14 23:42:09.477430 containerd[1499]: time="2025-05-14T23:42:09.477424057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 14 23:42:09.479674 containerd[1499]: time="2025-05-14T23:42:09.479629654Z" level=info msg="CreateContainer within sandbox \"81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 14 23:42:09.660217 containerd[1499]: time="2025-05-14T23:42:09.659281661Z" level=info msg="Container 3d3d3280b6f267200f2db790cf9b56899e76e7d6f9b59c8d5b41a7d924a62e4a: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:09.699712 containerd[1499]: time="2025-05-14T23:42:09.699647685Z" level=info msg="CreateContainer within sandbox \"81d7cdcc94f4e3fc2b193d60a89bbda8342f7f71cbc26cf3b0466d929d4e5428\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3d3d3280b6f267200f2db790cf9b56899e76e7d6f9b59c8d5b41a7d924a62e4a\""
May 14 23:42:09.706289 containerd[1499]: time="2025-05-14T23:42:09.706222900Z" level=info msg="StartContainer for \"3d3d3280b6f267200f2db790cf9b56899e76e7d6f9b59c8d5b41a7d924a62e4a\""
May 14 23:42:09.707939 containerd[1499]: time="2025-05-14T23:42:09.707898282Z" level=info msg="connecting to shim 3d3d3280b6f267200f2db790cf9b56899e76e7d6f9b59c8d5b41a7d924a62e4a" address="unix:///run/containerd/s/08432c38079d27048b93c93118e813ab6cdbe1445475a58d47367e363d1eaca4" protocol=ttrpc version=3
May 14 23:42:09.742761 systemd[1]: Started cri-containerd-3d3d3280b6f267200f2db790cf9b56899e76e7d6f9b59c8d5b41a7d924a62e4a.scope - libcontainer container 3d3d3280b6f267200f2db790cf9b56899e76e7d6f9b59c8d5b41a7d924a62e4a.
May 14 23:42:09.792547 containerd[1499]: time="2025-05-14T23:42:09.792499528Z" level=info msg="StartContainer for \"3d3d3280b6f267200f2db790cf9b56899e76e7d6f9b59c8d5b41a7d924a62e4a\" returns successfully"
May 14 23:42:10.557072 kubelet[2727]: I0514 23:42:10.557001 2727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 14 23:42:10.557755 kubelet[2727]: I0514 23:42:10.557130 2727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 14 23:42:10.782966 kubelet[2727]: I0514 23:42:10.782889 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6wg62" podStartSLOduration=28.524350857 podStartE2EDuration="38.782869663s" podCreationTimestamp="2025-05-14 23:41:32 +0000 UTC" firstStartedPulling="2025-05-14 23:41:59.219722725 +0000 UTC m=+47.808825126" lastFinishedPulling="2025-05-14 23:42:09.478241531 +0000 UTC m=+58.067343932" observedRunningTime="2025-05-14 23:42:10.782444747 +0000 UTC m=+59.371547158" watchObservedRunningTime="2025-05-14 23:42:10.782869663 +0000 UTC m=+59.371972064"
May 14 23:42:12.108811 kubelet[2727]: I0514 23:42:12.108727 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 23:42:14.454355 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:43550.service - OpenSSH per-connection server daemon (10.0.0.1:43550).
May 14 23:42:14.516218 sshd[5056]: Accepted publickey for core from 10.0.0.1 port 43550 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:14.517925 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:14.522453 systemd-logind[1486]: New session 17 of user core.
May 14 23:42:14.528618 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 23:42:14.675240 sshd[5058]: Connection closed by 10.0.0.1 port 43550
May 14 23:42:14.675589 sshd-session[5056]: pam_unix(sshd:session): session closed for user core
May 14 23:42:14.681262 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:43550.service: Deactivated successfully.
May 14 23:42:14.683893 systemd[1]: session-17.scope: Deactivated successfully.
May 14 23:42:14.684768 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit.
May 14 23:42:14.685964 systemd-logind[1486]: Removed session 17.
May 14 23:42:16.200241 containerd[1499]: time="2025-05-14T23:42:16.200183731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d\" id:\"e133471337201a41a175699dbf200ab9fdcf5e9f61eca6bfd8247e180878eb38\" pid:5082 exited_at:{seconds:1747266136 nanos:199776397}"
May 14 23:42:19.689883 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:50140.service - OpenSSH per-connection server daemon (10.0.0.1:50140).
May 14 23:42:19.742691 sshd[5099]: Accepted publickey for core from 10.0.0.1 port 50140 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:19.744928 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:19.750074 systemd-logind[1486]: New session 18 of user core.
May 14 23:42:19.760733 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 23:42:19.886640 sshd[5101]: Connection closed by 10.0.0.1 port 50140
May 14 23:42:19.887041 sshd-session[5099]: pam_unix(sshd:session): session closed for user core
May 14 23:42:19.892097 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:50140.service: Deactivated successfully.
May 14 23:42:19.895135 systemd[1]: session-18.scope: Deactivated successfully.
May 14 23:42:19.895952 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit.
May 14 23:42:19.897807 systemd-logind[1486]: Removed session 18.
May 14 23:42:24.908906 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:50150.service - OpenSSH per-connection server daemon (10.0.0.1:50150).
May 14 23:42:24.957905 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 50150 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:24.959634 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:24.963975 systemd-logind[1486]: New session 19 of user core.
May 14 23:42:24.970607 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 23:42:25.092050 sshd[5119]: Connection closed by 10.0.0.1 port 50150
May 14 23:42:25.092501 sshd-session[5117]: pam_unix(sshd:session): session closed for user core
May 14 23:42:25.104041 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:50150.service: Deactivated successfully.
May 14 23:42:25.107405 systemd[1]: session-19.scope: Deactivated successfully.
May 14 23:42:25.112768 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit.
May 14 23:42:25.114022 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:50154.service - OpenSSH per-connection server daemon (10.0.0.1:50154).
May 14 23:42:25.115344 systemd-logind[1486]: Removed session 19.
May 14 23:42:25.164942 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 50154 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:25.167003 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:25.171982 systemd-logind[1486]: New session 20 of user core.
May 14 23:42:25.181772 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 23:42:25.851888 sshd[5134]: Connection closed by 10.0.0.1 port 50154
May 14 23:42:25.852393 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
May 14 23:42:25.864309 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:50154.service: Deactivated successfully.
May 14 23:42:25.867153 systemd[1]: session-20.scope: Deactivated successfully.
May 14 23:42:25.869093 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit.
May 14 23:42:25.870892 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:50166.service - OpenSSH per-connection server daemon (10.0.0.1:50166).
May 14 23:42:25.872242 systemd-logind[1486]: Removed session 20.
May 14 23:42:25.926731 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 50166 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:25.928947 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:25.935084 systemd-logind[1486]: New session 21 of user core.
May 14 23:42:25.949778 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 23:42:27.613214 sshd[5147]: Connection closed by 10.0.0.1 port 50166
May 14 23:42:27.614889 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
May 14 23:42:27.627581 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:50166.service: Deactivated successfully.
May 14 23:42:27.629591 systemd[1]: session-21.scope: Deactivated successfully.
May 14 23:42:27.629817 systemd[1]: session-21.scope: Consumed 616ms CPU time, 67.2M memory peak.
May 14 23:42:27.631008 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit.
May 14 23:42:27.632738 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:42016.service - OpenSSH per-connection server daemon (10.0.0.1:42016).
May 14 23:42:27.633725 systemd-logind[1486]: Removed session 21.
May 14 23:42:27.698407 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 42016 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:27.700457 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:27.706138 systemd-logind[1486]: New session 22 of user core.
May 14 23:42:27.714783 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 23:42:27.970002 sshd[5185]: Connection closed by 10.0.0.1 port 42016
May 14 23:42:27.968238 sshd-session[5182]: pam_unix(sshd:session): session closed for user core
May 14 23:42:27.981211 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:42016.service: Deactivated successfully.
May 14 23:42:27.983578 systemd[1]: session-22.scope: Deactivated successfully.
May 14 23:42:27.984446 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit.
May 14 23:42:27.986910 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:42032.service - OpenSSH per-connection server daemon (10.0.0.1:42032).
May 14 23:42:27.987893 systemd-logind[1486]: Removed session 22.
May 14 23:42:28.041750 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 42032 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:28.044017 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:28.051217 systemd-logind[1486]: New session 23 of user core.
May 14 23:42:28.068784 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 23:42:28.194323 sshd[5199]: Connection closed by 10.0.0.1 port 42032
May 14 23:42:28.196669 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
May 14 23:42:28.201953 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:42032.service: Deactivated successfully.
May 14 23:42:28.204658 systemd[1]: session-23.scope: Deactivated successfully.
May 14 23:42:28.205858 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit.
May 14 23:42:28.207317 systemd-logind[1486]: Removed session 23.
May 14 23:42:28.492337 kubelet[2727]: E0514 23:42:28.492280 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:30.492350 kubelet[2727]: E0514 23:42:30.492280 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:33.210138 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:42042.service - OpenSSH per-connection server daemon (10.0.0.1:42042).
May 14 23:42:33.260825 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 42042 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:33.262622 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:33.267756 systemd-logind[1486]: New session 24 of user core.
May 14 23:42:33.277270 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 23:42:33.440615 sshd[5214]: Connection closed by 10.0.0.1 port 42042
May 14 23:42:33.440978 sshd-session[5212]: pam_unix(sshd:session): session closed for user core
May 14 23:42:33.445189 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:42042.service: Deactivated successfully.
May 14 23:42:33.447510 systemd[1]: session-24.scope: Deactivated successfully.
May 14 23:42:33.448221 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit.
May 14 23:42:33.449114 systemd-logind[1486]: Removed session 24.
May 14 23:42:34.997166 kubelet[2727]: I0514 23:42:34.997100 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 23:42:37.379130 containerd[1499]: time="2025-05-14T23:42:37.379054220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fdf7b1e4d715ba9a1b5d712b4c4898bd27e8c542b0b6fe9f1a2503ec991fa3c\" id:\"ef29ae80abace3ea7f6fc5436c0741dc7dca84b6ba696a14d51df19de918368f\" pid:5243 exited_at:{seconds:1747266157 nanos:378279028}"
May 14 23:42:38.458622 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:39966.service - OpenSSH per-connection server daemon (10.0.0.1:39966).
May 14 23:42:38.510137 sshd[5263]: Accepted publickey for core from 10.0.0.1 port 39966 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:38.512314 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:38.517231 systemd-logind[1486]: New session 25 of user core.
May 14 23:42:38.522646 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 23:42:38.709638 sshd[5265]: Connection closed by 10.0.0.1 port 39966
May 14 23:42:38.709935 sshd-session[5263]: pam_unix(sshd:session): session closed for user core
May 14 23:42:38.714680 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:39966.service: Deactivated successfully.
May 14 23:42:38.717144 systemd[1]: session-25.scope: Deactivated successfully.
May 14 23:42:38.717918 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit.
May 14 23:42:38.718949 systemd-logind[1486]: Removed session 25.
May 14 23:42:43.722756 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:39968.service - OpenSSH per-connection server daemon (10.0.0.1:39968).
May 14 23:42:43.761441 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 39968 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:43.763198 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:43.768412 systemd-logind[1486]: New session 26 of user core.
May 14 23:42:43.775619 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 23:42:43.888695 sshd[5284]: Connection closed by 10.0.0.1 port 39968
May 14 23:42:43.889056 sshd-session[5282]: pam_unix(sshd:session): session closed for user core
May 14 23:42:43.893125 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:39968.service: Deactivated successfully.
May 14 23:42:43.895253 systemd[1]: session-26.scope: Deactivated successfully.
May 14 23:42:43.896023 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit.
May 14 23:42:43.897098 systemd-logind[1486]: Removed session 26.
May 14 23:42:46.189944 containerd[1499]: time="2025-05-14T23:42:46.189882678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1494d73e3e94de7bc896e794306d9b345f8909aa07bb79f8fd358cdae078031d\" id:\"fbeaf9e06017c0bad3c8f546476a71b22983af2b2f18151b5c59e146facf7c9c\" pid:5308 exited_at:{seconds:1747266166 nanos:189426499}"
May 14 23:42:48.901669 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:53258.service - OpenSSH per-connection server daemon (10.0.0.1:53258).
May 14 23:42:48.953123 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 53258 ssh2: RSA SHA256:zU8ALI8Cnz/YWAfXrwmAAMeOYRyoK5cuVdwyUoNLbA8
May 14 23:42:48.956243 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:48.960751 systemd-logind[1486]: New session 27 of user core.
May 14 23:42:48.971654 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 23:42:49.288682 sshd[5321]: Connection closed by 10.0.0.1 port 53258
May 14 23:42:49.336227 sshd-session[5319]: pam_unix(sshd:session): session closed for user core
May 14 23:42:49.340816 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:53258.service: Deactivated successfully.
May 14 23:42:49.342932 systemd[1]: session-27.scope: Deactivated successfully.
May 14 23:42:49.343866 systemd-logind[1486]: Session 27 logged out. Waiting for processes to exit.
May 14 23:42:49.344830 systemd-logind[1486]: Removed session 27.
May 14 23:42:49.492567 kubelet[2727]: E0514 23:42:49.492380 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"