May 14 18:05:54.805286 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:05:54.805313 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:05:54.805324 kernel: BIOS-provided physical RAM map:
May 14 18:05:54.805331 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 18:05:54.805337 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 18:05:54.805344 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 18:05:54.805351 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 14 18:05:54.805360 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 14 18:05:54.805366 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 14 18:05:54.805373 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 14 18:05:54.805379 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:05:54.805385 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 18:05:54.805392 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 18:05:54.805398 kernel: NX (Execute Disable) protection: active
May 14 18:05:54.805408 kernel: APIC: Static calls initialized
May 14 18:05:54.805415 kernel: SMBIOS 2.8 present.
May 14 18:05:54.805423 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 14 18:05:54.805429 kernel: DMI: Memory slots populated: 1/1
May 14 18:05:54.805436 kernel: Hypervisor detected: KVM
May 14 18:05:54.805443 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 18:05:54.805450 kernel: kvm-clock: using sched offset of 3271776447 cycles
May 14 18:05:54.805457 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 18:05:54.805465 kernel: tsc: Detected 2794.748 MHz processor
May 14 18:05:54.805472 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:05:54.805481 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:05:54.805488 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 14 18:05:54.805496 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 18:05:54.805503 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:05:54.805510 kernel: Using GB pages for direct mapping
May 14 18:05:54.805517 kernel: ACPI: Early table checksum verification disabled
May 14 18:05:54.805524 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 14 18:05:54.805531 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:54.805541 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:54.805548 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:54.805555 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 14 18:05:54.805562 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:54.805569 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:54.805576 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:54.805583 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:54.805590 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 14 18:05:54.805602 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 14 18:05:54.805610 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 14 18:05:54.805617 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 14 18:05:54.805624 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 14 18:05:54.805632 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 14 18:05:54.805639 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 14 18:05:54.805648 kernel: No NUMA configuration found
May 14 18:05:54.805656 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 14 18:05:54.805663 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 14 18:05:54.805670 kernel: Zone ranges:
May 14 18:05:54.805678 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:05:54.805685 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 14 18:05:54.805692 kernel: Normal empty
May 14 18:05:54.805699 kernel: Device empty
May 14 18:05:54.805706 kernel: Movable zone start for each node
May 14 18:05:54.805713 kernel: Early memory node ranges
May 14 18:05:54.805723 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 18:05:54.805730 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 14 18:05:54.805737 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 14 18:05:54.805744 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:05:54.805752 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 18:05:54.805759 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 14 18:05:54.805766 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 18:05:54.805773 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 18:05:54.805781 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:05:54.805790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 18:05:54.805797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 18:05:54.805805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:05:54.805812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 18:05:54.805820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 18:05:54.805827 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:05:54.805834 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 18:05:54.805855 kernel: TSC deadline timer available
May 14 18:05:54.805862 kernel: CPU topo: Max. logical packages: 1
May 14 18:05:54.805872 kernel: CPU topo: Max. logical dies: 1
May 14 18:05:54.805879 kernel: CPU topo: Max. dies per package: 1
May 14 18:05:54.805886 kernel: CPU topo: Max. threads per core: 1
May 14 18:05:54.805894 kernel: CPU topo: Num. cores per package: 4
May 14 18:05:54.805901 kernel: CPU topo: Num. threads per package: 4
May 14 18:05:54.805908 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 14 18:05:54.805915 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 18:05:54.805923 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 18:05:54.805930 kernel: kvm-guest: setup PV sched yield
May 14 18:05:54.805937 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 14 18:05:54.805947 kernel: Booting paravirtualized kernel on KVM
May 14 18:05:54.805954 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:05:54.805962 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 18:05:54.805969 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 14 18:05:54.805977 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 14 18:05:54.805984 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 18:05:54.805991 kernel: kvm-guest: PV spinlocks enabled
May 14 18:05:54.805998 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 18:05:54.806007 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:05:54.806017 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:05:54.806025 kernel: random: crng init done
May 14 18:05:54.806032 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 18:05:54.806039 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:05:54.806047 kernel: Fallback order for Node 0: 0
May 14 18:05:54.806054 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 14 18:05:54.806061 kernel: Policy zone: DMA32
May 14 18:05:54.806069 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:05:54.806078 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 18:05:54.806086 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:05:54.806093 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:05:54.806100 kernel: Dynamic Preempt: voluntary
May 14 18:05:54.806107 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:05:54.806115 kernel: rcu: RCU event tracing is enabled.
May 14 18:05:54.806123 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 18:05:54.806130 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:05:54.806138 kernel: Rude variant of Tasks RCU enabled.
May 14 18:05:54.806145 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:05:54.806154 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:05:54.806162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 18:05:54.806169 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:05:54.806177 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:05:54.806184 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:05:54.806192 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 18:05:54.806199 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:05:54.806221 kernel: Console: colour VGA+ 80x25
May 14 18:05:54.806229 kernel: printk: legacy console [ttyS0] enabled
May 14 18:05:54.806236 kernel: ACPI: Core revision 20240827
May 14 18:05:54.806244 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 18:05:54.806254 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:05:54.806262 kernel: x2apic enabled
May 14 18:05:54.806269 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:05:54.806277 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 18:05:54.806285 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 18:05:54.806295 kernel: kvm-guest: setup PV IPIs
May 14 18:05:54.806302 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 18:05:54.806310 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 14 18:05:54.806318 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 14 18:05:54.806326 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 18:05:54.806333 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 18:05:54.806341 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 18:05:54.806349 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:05:54.806356 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:05:54.806366 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:05:54.806374 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:05:54.806382 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 18:05:54.806389 kernel: RETBleed: Mitigation: untrained return thunk
May 14 18:05:54.806397 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 18:05:54.806405 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 18:05:54.806412 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 18:05:54.806421 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 18:05:54.806431 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 18:05:54.806438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:05:54.806446 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:05:54.806454 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:05:54.806461 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:05:54.806469 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 18:05:54.806477 kernel: Freeing SMP alternatives memory: 32K
May 14 18:05:54.806484 kernel: pid_max: default: 32768 minimum: 301
May 14 18:05:54.806492 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:05:54.806502 kernel: landlock: Up and running.
May 14 18:05:54.806509 kernel: SELinux: Initializing.
May 14 18:05:54.806517 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:05:54.806525 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:05:54.806533 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 18:05:54.806540 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 18:05:54.806548 kernel: ... version: 0
May 14 18:05:54.806555 kernel: ... bit width: 48
May 14 18:05:54.806563 kernel: ... generic registers: 6
May 14 18:05:54.806573 kernel: ... value mask: 0000ffffffffffff
May 14 18:05:54.806581 kernel: ... max period: 00007fffffffffff
May 14 18:05:54.806588 kernel: ... fixed-purpose events: 0
May 14 18:05:54.806596 kernel: ... event mask: 000000000000003f
May 14 18:05:54.806603 kernel: signal: max sigframe size: 1776
May 14 18:05:54.806611 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:05:54.806619 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:05:54.806626 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:05:54.806634 kernel: smp: Bringing up secondary CPUs ...
May 14 18:05:54.806644 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:05:54.806651 kernel: .... node #0, CPUs: #1 #2 #3
May 14 18:05:54.806659 kernel: smp: Brought up 1 node, 4 CPUs
May 14 18:05:54.806667 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 14 18:05:54.806675 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 136904K reserved, 0K cma-reserved)
May 14 18:05:54.806682 kernel: devtmpfs: initialized
May 14 18:05:54.806690 kernel: x86/mm: Memory block size: 128MB
May 14 18:05:54.806698 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:05:54.806705 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 18:05:54.806715 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:05:54.806723 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:05:54.806731 kernel: audit: initializing netlink subsys (disabled)
May 14 18:05:54.806739 kernel: audit: type=2000 audit(1747245951.675:1): state=initialized audit_enabled=0 res=1
May 14 18:05:54.806746 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:05:54.806754 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:05:54.806761 kernel: cpuidle: using governor menu
May 14 18:05:54.806769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:05:54.806776 kernel: dca service started, version 1.12.1
May 14 18:05:54.806786 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 14 18:05:54.806794 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 14 18:05:54.806802 kernel: PCI: Using configuration type 1 for base access
May 14 18:05:54.806809 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:05:54.806817 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:05:54.806825 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:05:54.806832 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:05:54.806862 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:05:54.806871 kernel: ACPI: Added _OSI(Module Device)
May 14 18:05:54.806889 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:05:54.806897 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:05:54.806912 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:05:54.806928 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:05:54.806935 kernel: ACPI: Interpreter enabled
May 14 18:05:54.806943 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 18:05:54.806950 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:05:54.806958 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:05:54.806966 kernel: PCI: Using E820 reservations for host bridge windows
May 14 18:05:54.806976 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 18:05:54.806983 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:05:54.807152 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:05:54.807280 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 18:05:54.807393 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 18:05:54.807403 kernel: PCI host bridge to bus 0000:00
May 14 18:05:54.807519 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 18:05:54.807630 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 18:05:54.807736 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 18:05:54.807854 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 14 18:05:54.807962 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 14 18:05:54.808095 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 14 18:05:54.808199 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:05:54.808343 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 14 18:05:54.808473 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 14 18:05:54.808589 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 14 18:05:54.808702 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 14 18:05:54.808815 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 14 18:05:54.808959 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 18:05:54.809116 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:05:54.809283 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 14 18:05:54.809420 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 14 18:05:54.809537 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 14 18:05:54.809661 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:05:54.809777 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 14 18:05:54.809913 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 14 18:05:54.810028 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 14 18:05:54.810155 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:05:54.810279 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 14 18:05:54.810394 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 14 18:05:54.810575 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 14 18:05:54.810695 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 14 18:05:54.810817 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 14 18:05:54.810964 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 18:05:54.811093 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 14 18:05:54.811211 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 14 18:05:54.811338 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 14 18:05:54.811510 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 14 18:05:54.811629 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 14 18:05:54.811640 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 18:05:54.811648 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 18:05:54.811659 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 18:05:54.811667 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 18:05:54.811674 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 18:05:54.811682 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 18:05:54.811689 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 18:05:54.811697 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 18:05:54.811704 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 18:05:54.811712 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 18:05:54.811719 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 18:05:54.811729 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 18:05:54.811736 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 18:05:54.811744 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 18:05:54.811751 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 18:05:54.811759 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 18:05:54.811766 kernel: iommu: Default domain type: Translated
May 14 18:05:54.811773 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:05:54.811781 kernel: PCI: Using ACPI for IRQ routing
May 14 18:05:54.811789 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 18:05:54.811798 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 18:05:54.811805 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 14 18:05:54.811940 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 18:05:54.812054 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 18:05:54.812168 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 18:05:54.812178 kernel: vgaarb: loaded
May 14 18:05:54.812186 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 18:05:54.812193 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 18:05:54.812204 kernel: clocksource: Switched to clocksource kvm-clock
May 14 18:05:54.812212 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:05:54.812227 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:05:54.812235 kernel: pnp: PnP ACPI init
May 14 18:05:54.812364 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 14 18:05:54.812376 kernel: pnp: PnP ACPI: found 6 devices
May 14 18:05:54.812384 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:05:54.812392 kernel: NET: Registered PF_INET protocol family
May 14 18:05:54.812402 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:05:54.812410 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 18:05:54.812417 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:05:54.812425 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:05:54.812433 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 18:05:54.812441 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 18:05:54.812448 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:05:54.812456 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:05:54.812464 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:05:54.812473 kernel: NET: Registered PF_XDP protocol family
May 14 18:05:54.812577 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 18:05:54.812680 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 18:05:54.812783 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 18:05:54.812901 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 14 18:05:54.813005 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 14 18:05:54.813109 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 14 18:05:54.813119 kernel: PCI: CLS 0 bytes, default 64
May 14 18:05:54.813130 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 14 18:05:54.813138 kernel: Initialise system trusted keyrings
May 14 18:05:54.813146 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 18:05:54.813153 kernel: Key type asymmetric registered
May 14 18:05:54.813161 kernel: Asymmetric key parser 'x509' registered
May 14 18:05:54.813168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:05:54.813176 kernel: io scheduler mq-deadline registered
May 14 18:05:54.813184 kernel: io scheduler kyber registered
May 14 18:05:54.813191 kernel: io scheduler bfq registered
May 14 18:05:54.813201 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:05:54.813209 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 18:05:54.813224 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 18:05:54.813232 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 18:05:54.813240 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:05:54.813247 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:05:54.813255 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 18:05:54.813263 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 18:05:54.813270 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 18:05:54.813393 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 18:05:54.813405 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 18:05:54.813511 kernel: rtc_cmos 00:04: registered as rtc0
May 14 18:05:54.813618 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T18:05:54 UTC (1747245954)
May 14 18:05:54.813725 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 14 18:05:54.813735 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 18:05:54.813743 kernel: NET: Registered PF_INET6 protocol family
May 14 18:05:54.813751 kernel: Segment Routing with IPv6
May 14 18:05:54.813761 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:05:54.813769 kernel: NET: Registered PF_PACKET protocol family
May 14 18:05:54.813776 kernel: Key type dns_resolver registered
May 14 18:05:54.813784 kernel: IPI shorthand broadcast: enabled
May 14 18:05:54.813792 kernel: sched_clock: Marking stable (2731002086, 111900237)->(2858483458, -15581135)
May 14 18:05:54.813799 kernel: registered taskstats version 1
May 14 18:05:54.813807 kernel: Loading compiled-in X.509 certificates
May 14 18:05:54.813815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:05:54.813823 kernel: Demotion targets for Node 0: null
May 14 18:05:54.813832 kernel: Key type .fscrypt registered
May 14 18:05:54.813857 kernel: Key type fscrypt-provisioning registered
May 14 18:05:54.813874 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:05:54.813889 kernel: ima: Allocated hash algorithm: sha1
May 14 18:05:54.813897 kernel: ima: No architecture policies found
May 14 18:05:54.813904 kernel: clk: Disabling unused clocks
May 14 18:05:54.813912 kernel: Warning: unable to open an initial console.
May 14 18:05:54.813920 kernel: Freeing unused kernel image (initmem) memory: 54424K
May 14 18:05:54.813927 kernel: Write protecting the kernel read-only data: 24576k
May 14 18:05:54.813938 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 14 18:05:54.813945 kernel: Run /init as init process
May 14 18:05:54.813953 kernel: with arguments:
May 14 18:05:54.813960 kernel: /init
May 14 18:05:54.813968 kernel: with environment:
May 14 18:05:54.813975 kernel: HOME=/
May 14 18:05:54.813983 kernel: TERM=linux
May 14 18:05:54.813990 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 18:05:54.813999 systemd[1]: Successfully made /usr/ read-only.
May 14 18:05:54.814019 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:05:54.814029 systemd[1]: Detected virtualization kvm.
May 14 18:05:54.814038 systemd[1]: Detected architecture x86-64.
May 14 18:05:54.814046 systemd[1]: Running in initrd.
May 14 18:05:54.814054 systemd[1]: No hostname configured, using default hostname.
May 14 18:05:54.814064 systemd[1]: Hostname set to .
May 14 18:05:54.814072 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:05:54.814081 systemd[1]: Queued start job for default target initrd.target.
May 14 18:05:54.814089 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:05:54.814098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:05:54.814107 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:05:54.814115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:05:54.814124 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:05:54.814135 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:05:54.814144 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:05:54.814153 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:05:54.814162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:05:54.814170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:05:54.814178 systemd[1]: Reached target paths.target - Path Units.
May 14 18:05:54.814186 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:05:54.814197 systemd[1]: Reached target swap.target - Swaps.
May 14 18:05:54.814205 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:05:54.814220 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:05:54.814228 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:05:54.814237 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:05:54.814245 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:05:54.814253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:05:54.814262 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:05:54.814274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:05:54.814282 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:05:54.814290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:05:54.814299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:05:54.814309 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:05:54.814318 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:05:54.814328 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:05:54.814337 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:05:54.814345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:05:54.814353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:05:54.814362 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:05:54.814373 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:05:54.814381 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:05:54.814407 systemd-journald[220]: Collecting audit messages is disabled.
May 14 18:05:54.814429 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:05:54.814438 systemd-journald[220]: Journal started
May 14 18:05:54.814456 systemd-journald[220]: Runtime Journal (/run/log/journal/ed03b11e17954531bc0ebd8ad9d501e9) is 6M, max 48.6M, 42.5M free.
May 14 18:05:54.809348 systemd-modules-load[221]: Inserted module 'overlay'
May 14 18:05:54.848700 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:05:54.848717 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:05:54.848735 kernel: Bridge firewalling registered
May 14 18:05:54.835781 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 14 18:05:54.851299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:05:54.853780 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:05:54.856318 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:05:54.862570 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:05:54.866006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:05:54.866756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:05:54.874480 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:05:54.882999 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:05:54.883391 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:05:54.884331 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:05:54.888122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:05:54.890287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:05:54.904062 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:05:54.906258 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:05:54.928525 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:05:54.939962 systemd-resolved[252]: Positive Trust Anchors:
May 14 18:05:54.939976 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:05:54.940006 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:05:54.942393 systemd-resolved[252]: Defaulting to hostname 'linux'.
May 14 18:05:54.943431 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:05:54.949620 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:05:55.035878 kernel: SCSI subsystem initialized
May 14 18:05:55.045872 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:05:55.056871 kernel: iscsi: registered transport (tcp)
May 14 18:05:55.077876 kernel: iscsi: registered transport (qla4xxx)
May 14 18:05:55.077908 kernel: QLogic iSCSI HBA Driver
May 14 18:05:55.099052 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:05:55.125126 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:05:55.126198 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:05:55.183159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:05:55.186583 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:05:55.242872 kernel: raid6: avx2x4 gen() 30404 MB/s
May 14 18:05:55.259866 kernel: raid6: avx2x2 gen() 30840 MB/s
May 14 18:05:55.276943 kernel: raid6: avx2x1 gen() 25855 MB/s
May 14 18:05:55.276957 kernel: raid6: using algorithm avx2x2 gen() 30840 MB/s
May 14 18:05:55.294947 kernel: raid6: .... xor() 19960 MB/s, rmw enabled
May 14 18:05:55.294980 kernel: raid6: using avx2x2 recovery algorithm
May 14 18:05:55.314869 kernel: xor: automatically using best checksumming function avx
May 14 18:05:55.476875 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:05:55.485636 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:05:55.487347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:05:55.526095 systemd-udevd[472]: Using default interface naming scheme 'v255'.
May 14 18:05:55.533676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:05:55.535664 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:05:55.562980 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
May 14 18:05:55.589579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:05:55.590957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:05:55.663827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:05:55.668951 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:05:55.705872 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 18:05:55.732373 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 18:05:55.732747 kernel: cryptd: max_cpu_qlen set to 1000
May 14 18:05:55.732764 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:05:55.732775 kernel: GPT:9289727 != 19775487
May 14 18:05:55.732785 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:05:55.732795 kernel: GPT:9289727 != 19775487
May 14 18:05:55.732805 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:05:55.732815 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:05:55.731971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:05:55.732087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:05:55.733886 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:05:55.743099 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 14 18:05:55.741484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:05:55.747871 kernel: AES CTR mode by8 optimization enabled
May 14 18:05:55.747893 kernel: libata version 3.00 loaded.
May 14 18:05:55.759301 kernel: ahci 0000:00:1f.2: version 3.0
May 14 18:05:55.783253 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 18:05:55.783270 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 14 18:05:55.783429 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 14 18:05:55.783562 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 18:05:55.783694 kernel: scsi host0: ahci
May 14 18:05:55.783855 kernel: scsi host1: ahci
May 14 18:05:55.784001 kernel: scsi host2: ahci
May 14 18:05:55.784133 kernel: scsi host3: ahci
May 14 18:05:55.784283 kernel: scsi host4: ahci
May 14 18:05:55.784416 kernel: scsi host5: ahci
May 14 18:05:55.784547 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
May 14 18:05:55.784559 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
May 14 18:05:55.784569 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
May 14 18:05:55.784578 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
May 14 18:05:55.784588 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
May 14 18:05:55.784598 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
May 14 18:05:55.794114 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:05:55.826524 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:05:55.829113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:05:55.842340 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:05:55.842418 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:05:55.853098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:05:55.855115 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 18:05:55.883083 disk-uuid[632]: Primary Header is updated.
May 14 18:05:55.883083 disk-uuid[632]: Secondary Entries is updated.
May 14 18:05:55.883083 disk-uuid[632]: Secondary Header is updated.
May 14 18:05:55.886865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:05:55.890859 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:05:56.092880 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 18:05:56.092946 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 18:05:56.092957 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 18:05:56.093870 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 18:05:56.093885 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 18:05:56.094871 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 18:05:56.095869 kernel: ata3.00: applying bridge limits
May 14 18:05:56.095885 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 18:05:56.096875 kernel: ata3.00: configured for UDMA/100
May 14 18:05:56.097873 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 18:05:56.145438 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 18:05:56.165554 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 18:05:56.165575 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 18:05:56.604719 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:05:56.607456 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:05:56.609942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:05:56.612234 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:05:56.615072 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 18:05:56.638385 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:05:56.892878 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:05:56.893241 disk-uuid[633]: The operation has completed successfully.
May 14 18:05:56.919062 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 18:05:56.919192 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 18:05:56.959106 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 18:05:56.971323 sh[661]: Success
May 14 18:05:56.990566 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 18:05:56.990616 kernel: device-mapper: uevent: version 1.0.3
May 14 18:05:56.990628 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 18:05:56.998871 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 14 18:05:57.028780 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 18:05:57.032564 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 18:05:57.048173 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 18:05:57.055476 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 18:05:57.055504 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (673)
May 14 18:05:57.056828 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7
May 14 18:05:57.058406 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:57.058427 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 18:05:57.062911 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 18:05:57.065134 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:05:57.067424 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 18:05:57.070067 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 18:05:57.072218 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 18:05:57.102864 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (706)
May 14 18:05:57.105015 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:57.105038 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:57.105049 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:05:57.112802 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 18:05:57.114891 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:57.115229 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 18:05:57.190882 ignition[747]: Ignition 2.21.0
May 14 18:05:57.190896 ignition[747]: Stage: fetch-offline
May 14 18:05:57.190930 ignition[747]: no configs at "/usr/lib/ignition/base.d"
May 14 18:05:57.190940 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:05:57.191021 ignition[747]: parsed url from cmdline: ""
May 14 18:05:57.191025 ignition[747]: no config URL provided
May 14 18:05:57.191029 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:05:57.191039 ignition[747]: no config at "/usr/lib/ignition/user.ign"
May 14 18:05:57.191062 ignition[747]: op(1): [started] loading QEMU firmware config module
May 14 18:05:57.191067 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 18:05:57.201800 ignition[747]: op(1): [finished] loading QEMU firmware config module
May 14 18:05:57.204375 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:05:57.207519 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:05:57.244633 ignition[747]: parsing config with SHA512: 32426bc24715d6f0dc23f480de96d9f7716ff99b3d3c290c76b73c7da9a9fd84780ee741b5653f8bfa8d155afae2a30dd31443c4a494359566dc476ae11cf622
May 14 18:05:57.245707 systemd-networkd[851]: lo: Link UP
May 14 18:05:57.245719 systemd-networkd[851]: lo: Gained carrier
May 14 18:05:57.247231 systemd-networkd[851]: Enumeration completed
May 14 18:05:57.247435 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:05:57.247570 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:05:57.247574 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:05:57.248505 systemd-networkd[851]: eth0: Link UP
May 14 18:05:57.248508 systemd-networkd[851]: eth0: Gained carrier
May 14 18:05:57.248516 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:05:57.250054 systemd[1]: Reached target network.target - Network.
May 14 18:05:57.260869 unknown[747]: fetched base config from "system"
May 14 18:05:57.260881 unknown[747]: fetched user config from "qemu"
May 14 18:05:57.262689 ignition[747]: fetch-offline: fetch-offline passed
May 14 18:05:57.263541 ignition[747]: Ignition finished successfully
May 14 18:05:57.266509 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:05:57.268961 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 18:05:57.269893 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:05:57.272426 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 18:05:57.309702 ignition[856]: Ignition 2.21.0
May 14 18:05:57.309716 ignition[856]: Stage: kargs
May 14 18:05:57.309866 ignition[856]: no configs at "/usr/lib/ignition/base.d"
May 14 18:05:57.309877 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:05:57.311927 ignition[856]: kargs: kargs passed
May 14 18:05:57.311993 ignition[856]: Ignition finished successfully
May 14 18:05:57.316394 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 18:05:57.319428 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 18:05:57.356865 ignition[865]: Ignition 2.21.0
May 14 18:05:57.356876 ignition[865]: Stage: disks
May 14 18:05:57.356978 ignition[865]: no configs at "/usr/lib/ignition/base.d"
May 14 18:05:57.356989 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:05:57.358237 ignition[865]: disks: disks passed
May 14 18:05:57.360931 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 18:05:57.358283 ignition[865]: Ignition finished successfully
May 14 18:05:57.361762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 18:05:57.363436 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 18:05:57.363773 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:05:57.364288 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:05:57.364608 systemd[1]: Reached target basic.target - Basic System.
May 14 18:05:57.366085 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 18:05:57.389255 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 18:05:57.397261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 18:05:57.399363 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 18:05:57.502868 kernel: EXT4-fs (vda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none.
May 14 18:05:57.503309 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 18:05:57.504535 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 18:05:57.507368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:05:57.508173 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 18:05:57.511080 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 18:05:57.511139 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 18:05:57.512890 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:05:57.524749 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 18:05:57.527888 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (883)
May 14 18:05:57.527946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 18:05:57.532782 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:57.532805 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:57.532822 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:05:57.536648 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:05:57.563855 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
May 14 18:05:57.567912 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
May 14 18:05:57.572780 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
May 14 18:05:57.577198 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 18:05:57.658077 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 18:05:57.660154 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 18:05:57.661698 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 18:05:57.681870 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:57.693482 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 18:05:57.704743 ignition[996]: INFO : Ignition 2.21.0
May 14 18:05:57.704743 ignition[996]: INFO : Stage: mount
May 14 18:05:57.706563 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:05:57.706563 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:05:57.708732 ignition[996]: INFO : mount: mount passed
May 14 18:05:57.708732 ignition[996]: INFO : Ignition finished successfully
May 14 18:05:57.709336 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:05:57.711530 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:05:58.054656 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 18:05:58.056307 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:05:58.090461 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1009)
May 14 18:05:58.090497 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:58.090509 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:58.091361 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:05:58.095052 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:05:58.120929 ignition[1026]: INFO : Ignition 2.21.0
May 14 18:05:58.120929 ignition[1026]: INFO : Stage: files
May 14 18:05:58.123122 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:05:58.123122 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:05:58.125768 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:05:58.125768 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:05:58.125768 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:05:58.130284 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:05:58.130284 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:05:58.130284 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:05:58.130284 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:05:58.130284 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 18:05:58.126838 unknown[1026]: wrote ssh authorized keys file for user: core
May 14 18:05:58.201471 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:05:58.485973 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:05:58.485973 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:05:58.490166 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:05:58.490166 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:05:58.490166 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:05:58.490166 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:05:58.490166 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:05:58.490166 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:05:58.490166 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:05:58.502568 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:05:58.502568 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:05:58.502568 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:58.502568 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:58.502568 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:58.502568 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 18:05:58.944754 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 18:05:59.133057 systemd-networkd[851]: eth0: Gained IPv6LL
May 14 18:05:59.335809 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:59.335809 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 18:05:59.339554 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:05:59.343230 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:05:59.343230 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 18:05:59.343230 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 18:05:59.348143 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:05:59.348143 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:05:59.348143 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 18:05:59.348143 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 18:05:59.363837 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:05:59.368138 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:05:59.369724 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 18:05:59.369724 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:05:59.369724 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:05:59.369724 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:05:59.369724 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:05:59.369724 ignition[1026]: INFO : files: files passed
May 14 18:05:59.369724 ignition[1026]: INFO : Ignition finished successfully
May 14 18:05:59.376827 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:05:59.379639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:05:59.382580 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:05:59.403004 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:05:59.403149 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:05:59.404342 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 18:05:59.410237 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:05:59.410237 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:05:59.414716 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:05:59.413072 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:05:59.414970 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 18:05:59.418150 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 18:05:59.469310 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 18:05:59.469439 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 18:05:59.471750 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 18:05:59.472853 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 18:05:59.474778 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 18:05:59.476774 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 18:05:59.517743 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:05:59.520330 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 18:05:59.543205 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 18:05:59.545495 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:05:59.545656 systemd[1]: Stopped target timers.target - Timer Units. May 14 18:05:59.547859 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 18:05:59.547989 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:05:59.552657 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 18:05:59.552793 systemd[1]: Stopped target basic.target - Basic System. May 14 18:05:59.554707 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 18:05:59.555216 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:05:59.555553 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 14 18:05:59.555900 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 18:05:59.556395 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 18:05:59.556728 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:05:59.557251 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 18:05:59.557579 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 18:05:59.557925 systemd[1]: Stopped target swap.target - Swaps. May 14 18:05:59.558398 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 18:05:59.558506 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 18:05:59.573584 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 18:05:59.573945 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:05:59.574402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 18:05:59.574510 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:05:59.580090 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 18:05:59.580206 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 18:05:59.582481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 18:05:59.582586 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:05:59.585385 systemd[1]: Stopped target paths.target - Path Units. May 14 18:05:59.585632 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 18:05:59.592907 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:05:59.593059 systemd[1]: Stopped target slices.target - Slice Units. May 14 18:05:59.595624 systemd[1]: Stopped target sockets.target - Socket Units. 
May 14 18:05:59.597309 systemd[1]: iscsid.socket: Deactivated successfully. May 14 18:05:59.597397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:05:59.599080 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 18:05:59.599201 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:05:59.600764 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 18:05:59.600929 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:05:59.602576 systemd[1]: ignition-files.service: Deactivated successfully. May 14 18:05:59.602678 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 18:05:59.606394 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 18:05:59.608104 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 18:05:59.609461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 18:05:59.609615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:05:59.611261 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 18:05:59.611369 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:05:59.618768 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 18:05:59.622979 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 18:05:59.639862 ignition[1081]: INFO : Ignition 2.21.0 May 14 18:05:59.639862 ignition[1081]: INFO : Stage: umount May 14 18:05:59.641709 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:05:59.641709 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:05:59.644044 ignition[1081]: INFO : umount: umount passed May 14 18:05:59.644044 ignition[1081]: INFO : Ignition finished successfully May 14 18:05:59.644422 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 14 18:05:59.647711 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 18:05:59.647887 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 18:05:59.650148 systemd[1]: Stopped target network.target - Network. May 14 18:05:59.651255 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 18:05:59.651304 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 18:05:59.653293 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 18:05:59.653343 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 18:05:59.655249 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 18:05:59.655299 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 18:05:59.655580 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 18:05:59.655618 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 18:05:59.656214 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 18:05:59.660707 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 18:05:59.666259 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 18:05:59.666403 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 18:05:59.670610 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 18:05:59.670892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 18:05:59.670942 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:05:59.675526 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 18:05:59.685887 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 18:05:59.686016 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
May 14 18:05:59.689756 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 18:05:59.689925 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 18:05:59.693052 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 18:05:59.693104 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 18:05:59.695798 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 18:05:59.697898 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 18:05:59.697956 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:05:59.698192 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:05:59.698232 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:05:59.703041 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 18:05:59.703102 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 18:05:59.704287 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:05:59.705508 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:05:59.724104 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 18:05:59.724242 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 18:05:59.728707 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 18:05:59.728913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:05:59.731180 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 18:05:59.731225 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 18:05:59.733289 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
May 14 18:05:59.733326 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:05:59.734347 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 18:05:59.734396 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 18:05:59.738082 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 18:05:59.738144 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 18:05:59.741052 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 18:05:59.741112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:05:59.745289 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 18:05:59.745680 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 18:05:59.745731 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:05:59.750097 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 18:05:59.750146 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:05:59.754599 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 18:05:59.754649 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:05:59.757988 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 18:05:59.758036 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:05:59.759130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:05:59.759173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:05:59.776723 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 14 18:05:59.776836 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 18:05:59.820387 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 18:05:59.820540 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 18:05:59.822020 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 18:05:59.824163 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 18:05:59.824232 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 18:05:59.828215 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 18:05:59.859786 systemd[1]: Switching root. May 14 18:05:59.900102 systemd-journald[220]: Journal stopped May 14 18:06:01.038947 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 14 18:06:01.039010 kernel: SELinux: policy capability network_peer_controls=1 May 14 18:06:01.039031 kernel: SELinux: policy capability open_perms=1 May 14 18:06:01.039045 kernel: SELinux: policy capability extended_socket_class=1 May 14 18:06:01.039064 kernel: SELinux: policy capability always_check_network=0 May 14 18:06:01.039076 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 18:06:01.039087 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 18:06:01.039098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 18:06:01.039114 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 18:06:01.039125 kernel: SELinux: policy capability userspace_initial_context=0 May 14 18:06:01.039138 kernel: audit: type=1403 audit(1747245960.272:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 18:06:01.039161 systemd[1]: Successfully loaded SELinux policy in 46.709ms. May 14 18:06:01.039186 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.181ms. 
May 14 18:06:01.039199 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:06:01.039212 systemd[1]: Detected virtualization kvm. May 14 18:06:01.039224 systemd[1]: Detected architecture x86-64. May 14 18:06:01.039236 systemd[1]: Detected first boot. May 14 18:06:01.039248 systemd[1]: Initializing machine ID from VM UUID. May 14 18:06:01.039259 zram_generator::config[1127]: No configuration found. May 14 18:06:01.039272 kernel: Guest personality initialized and is inactive May 14 18:06:01.039285 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 18:06:01.039297 kernel: Initialized host personality May 14 18:06:01.039308 kernel: NET: Registered PF_VSOCK protocol family May 14 18:06:01.039319 systemd[1]: Populated /etc with preset unit settings. May 14 18:06:01.039331 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 18:06:01.039349 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 18:06:01.039360 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 18:06:01.039372 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 18:06:01.039384 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 18:06:01.039398 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 18:06:01.039409 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 18:06:01.039425 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 18:06:01.039437 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
May 14 18:06:01.039449 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 18:06:01.039461 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 18:06:01.039473 systemd[1]: Created slice user.slice - User and Session Slice. May 14 18:06:01.039484 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:06:01.039497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:06:01.039511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 18:06:01.039523 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 18:06:01.039535 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 18:06:01.039547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:06:01.039558 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 18:06:01.039571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:06:01.039582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:06:01.039596 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 18:06:01.039608 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 18:06:01.039620 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 18:06:01.039632 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 18:06:01.039644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:06:01.039656 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 14 18:06:01.039667 systemd[1]: Reached target slices.target - Slice Units. May 14 18:06:01.039680 systemd[1]: Reached target swap.target - Swaps. May 14 18:06:01.039691 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 18:06:01.039703 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 18:06:01.039718 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 18:06:01.039730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:06:01.039741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:06:01.039753 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:06:01.039765 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 18:06:01.039777 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 18:06:01.039789 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 18:06:01.039800 systemd[1]: Mounting media.mount - External Media Directory... May 14 18:06:01.039812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:06:01.039826 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 18:06:01.039838 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 18:06:01.039863 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 18:06:01.039876 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 18:06:01.039887 systemd[1]: Reached target machines.target - Containers. May 14 18:06:01.039899 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 14 18:06:01.039911 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:06:01.039922 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:06:01.039937 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 18:06:01.039950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:06:01.039962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 18:06:01.039974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:06:01.039986 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 18:06:01.039998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:06:01.040010 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 18:06:01.040022 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 18:06:01.040037 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 18:06:01.040049 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 18:06:01.040068 systemd[1]: Stopped systemd-fsck-usr.service. May 14 18:06:01.040081 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:06:01.040094 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:06:01.040105 kernel: fuse: init (API version 7.41) May 14 18:06:01.040117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 14 18:06:01.040129 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:06:01.040140 kernel: loop: module loaded May 14 18:06:01.040154 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 18:06:01.040166 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 18:06:01.040178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:06:01.040190 kernel: ACPI: bus type drm_connector registered May 14 18:06:01.040202 systemd[1]: verity-setup.service: Deactivated successfully. May 14 18:06:01.040217 systemd[1]: Stopped verity-setup.service. May 14 18:06:01.040229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:06:01.040241 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 18:06:01.040253 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 18:06:01.040265 systemd[1]: Mounted media.mount - External Media Directory. May 14 18:06:01.040295 systemd-journald[1202]: Collecting audit messages is disabled. May 14 18:06:01.040317 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 18:06:01.040333 systemd-journald[1202]: Journal started May 14 18:06:01.040357 systemd-journald[1202]: Runtime Journal (/run/log/journal/ed03b11e17954531bc0ebd8ad9d501e9) is 6M, max 48.6M, 42.5M free. May 14 18:06:00.795698 systemd[1]: Queued start job for default target multi-user.target. May 14 18:06:00.810647 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 18:06:00.811105 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 18:06:01.041863 systemd[1]: Started systemd-journald.service - Journal Service. 
May 14 18:06:01.043375 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 18:06:01.044606 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 18:06:01.045927 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 18:06:01.047418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:06:01.048965 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 18:06:01.049190 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 18:06:01.050779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:06:01.051093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:06:01.052579 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 18:06:01.052795 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 18:06:01.054158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:06:01.054371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:06:01.055893 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 18:06:01.056113 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 18:06:01.057467 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:06:01.057676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:06:01.059103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 18:06:01.060521 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:06:01.062104 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 18:06:01.063654 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
May 14 18:06:01.079560 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 18:06:01.082250 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 18:06:01.085958 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 18:06:01.087210 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 18:06:01.087243 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:06:01.089316 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 18:06:01.093942 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 18:06:01.095141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:06:01.096462 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 18:06:01.098970 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 18:06:01.100182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 18:06:01.101279 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 18:06:01.102563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 18:06:01.105976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:06:01.109099 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 18:06:01.114134 systemd-journald[1202]: Time spent on flushing to /var/log/journal/ed03b11e17954531bc0ebd8ad9d501e9 is 14.512ms for 975 entries. 
May 14 18:06:01.114134 systemd-journald[1202]: System Journal (/var/log/journal/ed03b11e17954531bc0ebd8ad9d501e9) is 8M, max 195.6M, 187.6M free. May 14 18:06:01.148801 systemd-journald[1202]: Received client request to flush runtime journal. May 14 18:06:01.148871 kernel: loop0: detected capacity change from 0 to 205544 May 14 18:06:01.112011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 18:06:01.116086 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:06:01.116418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 18:06:01.119575 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 18:06:01.132645 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 18:06:01.135315 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 18:06:01.138462 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 18:06:01.142499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:06:01.151412 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 18:06:01.151652 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 14 18:06:01.151664 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 14 18:06:01.161335 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:06:01.165007 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 18:06:01.173899 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 18:06:01.175359 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
May 14 18:06:01.196966 kernel: loop1: detected capacity change from 0 to 146240 May 14 18:06:01.203494 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 18:06:01.208332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:06:01.226078 kernel: loop2: detected capacity change from 0 to 113872 May 14 18:06:01.235535 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. May 14 18:06:01.235557 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. May 14 18:06:01.242041 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:06:01.270873 kernel: loop3: detected capacity change from 0 to 205544 May 14 18:06:01.278888 kernel: loop4: detected capacity change from 0 to 146240 May 14 18:06:01.289877 kernel: loop5: detected capacity change from 0 to 113872 May 14 18:06:01.298828 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 14 18:06:01.299402 (sd-merge)[1272]: Merged extensions into '/usr'. May 14 18:06:01.303737 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)... May 14 18:06:01.303758 systemd[1]: Reloading... May 14 18:06:01.365874 zram_generator::config[1302]: No configuration found. May 14 18:06:01.429965 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 18:06:01.467019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:06:01.554700 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 18:06:01.554923 systemd[1]: Reloading finished in 250 ms. May 14 18:06:01.584341 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 14 18:06:01.585928 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 18:06:01.606504 systemd[1]: Starting ensure-sysext.service... May 14 18:06:01.608429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 18:06:01.629216 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 18:06:01.629253 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 14 18:06:01.629609 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 18:06:01.629892 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 18:06:01.630764 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 18:06:01.631053 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. May 14 18:06:01.631129 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. May 14 18:06:01.635242 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:06:01.635256 systemd-tmpfiles[1337]: Skipping /boot May 14 18:06:01.635454 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... May 14 18:06:01.635469 systemd[1]: Reloading... May 14 18:06:01.647067 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:06:01.647081 systemd-tmpfiles[1337]: Skipping /boot May 14 18:06:01.687874 zram_generator::config[1370]: No configuration found. May 14 18:06:01.770966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 18:06:01.850255 systemd[1]: Reloading finished in 214 ms.
May 14 18:06:01.875360 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:06:01.893727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:06:01.903304 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:06:01.905625 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:06:01.928256 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:06:01.931973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:06:01.934674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:06:01.939049 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:06:01.942828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:06:01.943058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:06:01.948790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:06:01.953010 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:06:01.955564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:06:01.956778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:06:01.956883 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:06:01.959019 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:06:01.960233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:06:01.961755 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:06:01.965112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:06:01.965333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:06:01.967099 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:06:01.967492 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:06:01.973376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:06:01.974359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:06:01.982414 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
May 14 18:06:01.984313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:06:01.984527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:06:01.989165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:06:01.993017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:06:01.993802 augenrules[1438]: No rules
May 14 18:06:01.995245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:06:01.996591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:06:01.996721 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:06:02.002762 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:06:02.004901 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:06:02.006576 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:06:02.006825 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:06:02.009770 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:06:02.012107 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:06:02.013903 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:06:02.015746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:06:02.015987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:06:02.017547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:06:02.019381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:06:02.019602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:06:02.021467 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:06:02.021682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:06:02.023547 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:06:02.046103 systemd[1]: Finished ensure-sysext.service.
May 14 18:06:02.048514 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:06:02.049953 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:06:02.051125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:06:02.053092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:06:02.057122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:06:02.060361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:06:02.065995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:06:02.067749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:06:02.067789 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:06:02.072117 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:06:02.077037 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:06:02.078178 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:06:02.078208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:06:02.079011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:06:02.079259 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:06:02.081375 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:06:02.081596 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:06:02.083082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:06:02.083309 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:06:02.084835 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:06:02.085080 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:06:02.094966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:06:02.095026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:06:02.115152 augenrules[1485]: /sbin/augenrules: No change
May 14 18:06:02.133173 augenrules[1517]: No rules
May 14 18:06:02.134394 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:06:02.136073 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:06:02.147452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:06:02.152347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 18:06:02.154023 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 18:06:02.167916 systemd-resolved[1406]: Positive Trust Anchors:
May 14 18:06:02.168208 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:06:02.168282 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:06:02.171877 systemd-resolved[1406]: Defaulting to hostname 'linux'.
May 14 18:06:02.173664 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 18:06:02.175521 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:06:02.177045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:06:02.193874 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 14 18:06:02.195874 kernel: mousedev: PS/2 mouse device common for all mice
May 14 18:06:02.199920 kernel: ACPI: button: Power Button [PWRF]
May 14 18:06:02.249885 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 18:06:02.250461 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 18:06:02.250200 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 18:06:02.251788 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:06:02.252973 systemd-networkd[1490]: lo: Link UP
May 14 18:06:02.252985 systemd-networkd[1490]: lo: Gained carrier
May 14 18:06:02.254009 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 18:06:02.254580 systemd-networkd[1490]: Enumeration completed
May 14 18:06:02.255297 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 18:06:02.256581 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 18:06:02.256987 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:06:02.257000 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:06:02.257649 systemd-networkd[1490]: eth0: Link UP
May 14 18:06:02.257834 systemd-networkd[1490]: eth0: Gained carrier
May 14 18:06:02.257868 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:06:02.257899 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 18:06:02.259399 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 18:06:02.259433 systemd[1]: Reached target paths.target - Path Units.
May 14 18:06:02.260477 systemd[1]: Reached target time-set.target - System Time Set.
May 14 18:06:02.261666 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 18:06:02.262872 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 18:06:02.264463 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:06:02.266242 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 18:06:02.269061 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 18:06:02.270882 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:06:02.273621 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 18:06:02.274771 systemd-timesyncd[1492]: Network configuration changed, trying to establish connection.
May 14 18:06:02.275091 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 18:06:02.275856 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 18:06:02.275900 systemd-timesyncd[1492]: Initial clock synchronization to Wed 2025-05-14 18:06:02.189996 UTC.
May 14 18:06:02.276495 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 18:06:02.291328 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 18:06:02.293593 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 18:06:02.295473 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:06:02.297132 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 18:06:02.298922 systemd[1]: Reached target network.target - Network.
May 14 18:06:02.299910 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:06:02.300900 systemd[1]: Reached target basic.target - Basic System.
May 14 18:06:02.301931 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 18:06:02.301964 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 18:06:02.302980 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 18:06:02.306050 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 18:06:02.309103 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 18:06:02.312050 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 18:06:02.319002 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 18:06:02.320223 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 18:06:02.322121 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 18:06:02.325745 jq[1551]: false
May 14 18:06:02.333785 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 18:06:02.338596 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 18:06:02.340860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 18:06:02.344085 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache
May 14 18:06:02.344100 oslogin_cache_refresh[1554]: Refreshing passwd entry cache
May 14 18:06:02.352865 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting
May 14 18:06:02.352865 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:06:02.352865 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache
May 14 18:06:02.352948 extend-filesystems[1552]: Found loop3
May 14 18:06:02.352948 extend-filesystems[1552]: Found loop4
May 14 18:06:02.352948 extend-filesystems[1552]: Found loop5
May 14 18:06:02.352948 extend-filesystems[1552]: Found sr0
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda1
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda2
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda3
May 14 18:06:02.352948 extend-filesystems[1552]: Found usr
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda4
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda6
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda7
May 14 18:06:02.352948 extend-filesystems[1552]: Found vda9
May 14 18:06:02.352948 extend-filesystems[1552]: Checking size of /dev/vda9
May 14 18:06:02.352053 oslogin_cache_refresh[1554]: Failure getting users, quitting
May 14 18:06:02.364219 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting
May 14 18:06:02.364219 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:06:02.352074 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:06:02.352131 oslogin_cache_refresh[1554]: Refreshing group entry cache
May 14 18:06:02.361242 oslogin_cache_refresh[1554]: Failure getting groups, quitting
May 14 18:06:02.361252 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:06:02.370029 extend-filesystems[1552]: Resized partition /dev/vda9
May 14 18:06:02.372342 extend-filesystems[1565]: resize2fs 1.47.2 (1-Jan-2025)
May 14 18:06:02.376874 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 18:06:02.380188 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 18:06:02.386547 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 18:06:02.390968 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:06:02.393196 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:06:02.396236 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 18:06:02.397005 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 18:06:02.397791 systemd[1]: Starting update-engine.service - Update Engine...
May 14 18:06:02.400945 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 18:06:02.407098 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 18:06:02.412361 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 18:06:02.414305 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 18:06:02.427982 jq[1574]: true
May 14 18:06:02.414694 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 18:06:02.415228 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 18:06:02.415604 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 18:06:02.419261 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 18:06:02.419492 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 18:06:02.422312 systemd[1]: motdgen.service: Deactivated successfully.
May 14 18:06:02.422678 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 18:06:02.431017 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 18:06:02.431017 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 18:06:02.431017 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 18:06:02.434813 extend-filesystems[1552]: Resized filesystem in /dev/vda9
May 14 18:06:02.435070 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 18:06:02.435350 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 18:06:02.440276 update_engine[1571]: I20250514 18:06:02.440218 1571 main.cc:92] Flatcar Update Engine starting
May 14 18:06:02.445335 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 18:06:02.445781 jq[1581]: true
May 14 18:06:02.469254 kernel: kvm_amd: TSC scaling supported
May 14 18:06:02.469301 kernel: kvm_amd: Nested Virtualization enabled
May 14 18:06:02.469339 kernel: kvm_amd: Nested Paging enabled
May 14 18:06:02.469351 kernel: kvm_amd: LBR virtualization supported
May 14 18:06:02.469364 tar[1579]: linux-amd64/helm
May 14 18:06:02.470972 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 18:06:02.470996 kernel: kvm_amd: Virtual GIF supported
May 14 18:06:02.474936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:06:02.487802 dbus-daemon[1549]: [system] SELinux support is enabled
May 14 18:06:02.488330 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 18:06:02.492622 update_engine[1571]: I20250514 18:06:02.492578 1571 update_check_scheduler.cc:74] Next update check in 11m43s
May 14 18:06:02.499466 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 18:06:02.499492 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 18:06:02.500940 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 18:06:02.500954 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 18:06:02.505957 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 18:06:02.509687 systemd[1]: Started update-engine.service - Update Engine.
May 14 18:06:02.517004 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 18:06:02.558889 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 18:06:02.599871 kernel: EDAC MC: Ver: 3.0.0
May 14 18:06:02.605006 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 18:06:02.621593 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 18:06:02.641293 systemd[1]: issuegen.service: Deactivated successfully.
May 14 18:06:02.641682 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 18:06:02.650635 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 18:06:02.651122 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (Power Button)
May 14 18:06:02.651382 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 18:06:02.651956 systemd-logind[1566]: New seat seat0.
May 14 18:06:02.654237 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 18:06:02.669984 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 18:06:02.672241 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 18:06:02.673707 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 18:06:02.674145 systemd[1]: Reached target getty.target - Login Prompts.
May 14 18:06:02.690805 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 18:06:02.773207 bash[1615]: Updated "/home/core/.ssh/authorized_keys"
May 14 18:06:02.775255 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 18:06:02.778497 containerd[1584]: time="2025-05-14T18:06:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 18:06:02.779056 containerd[1584]: time="2025-05-14T18:06:02.779022519Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 14 18:06:02.786453 containerd[1584]: time="2025-05-14T18:06:02.786414946Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.755µs"
May 14 18:06:02.786453 containerd[1584]: time="2025-05-14T18:06:02.786440864Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 18:06:02.786520 containerd[1584]: time="2025-05-14T18:06:02.786457415Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 18:06:02.786633 containerd[1584]: time="2025-05-14T18:06:02.786605273Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 18:06:02.786633 containerd[1584]: time="2025-05-14T18:06:02.786625160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 18:06:02.786679 containerd[1584]: time="2025-05-14T18:06:02.786646199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:06:02.786727 containerd[1584]: time="2025-05-14T18:06:02.786705831Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:06:02.786727 containerd[1584]: time="2025-05-14T18:06:02.786720008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:06:02.786996 containerd[1584]: time="2025-05-14T18:06:02.786966059Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:06:02.786996 containerd[1584]: time="2025-05-14T18:06:02.786983712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:06:02.786996 containerd[1584]: time="2025-05-14T18:06:02.786993330Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:06:02.787071 containerd[1584]: time="2025-05-14T18:06:02.787001305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 18:06:02.787121 containerd[1584]: time="2025-05-14T18:06:02.787101473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 18:06:02.787339 containerd[1584]: time="2025-05-14T18:06:02.787310325Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:06:02.787367 containerd[1584]: time="2025-05-14T18:06:02.787343758Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:06:02.787367 containerd[1584]: time="2025-05-14T18:06:02.787354748Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 18:06:02.787405 containerd[1584]: time="2025-05-14T18:06:02.787389013Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 18:06:02.788553 containerd[1584]: time="2025-05-14T18:06:02.787595129Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 18:06:02.788553 containerd[1584]: time="2025-05-14T18:06:02.787660622Z" level=info msg="metadata content store policy set" policy=shared
May 14 18:06:02.792334 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 18:06:02.793893 containerd[1584]: time="2025-05-14T18:06:02.793828251Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 18:06:02.793938 containerd[1584]: time="2025-05-14T18:06:02.793926836Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 18:06:02.793959 containerd[1584]: time="2025-05-14T18:06:02.793945390Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 18:06:02.794029 containerd[1584]: time="2025-05-14T18:06:02.794006766Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 18:06:02.794056 containerd[1584]: time="2025-05-14T18:06:02.794032263Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 18:06:02.794056 containerd[1584]: time="2025-05-14T18:06:02.794044717Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 18:06:02.794111 containerd[1584]: time="2025-05-14T18:06:02.794061067Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 18:06:02.794111 containerd[1584]: time="2025-05-14T18:06:02.794083279Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 18:06:02.794111 containerd[1584]: time="2025-05-14T18:06:02.794094270Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 18:06:02.794111 containerd[1584]: time="2025-05-14T18:06:02.794104028Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 18:06:02.794180 containerd[1584]: time="2025-05-14T18:06:02.794113275Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 18:06:02.794180 containerd[1584]: time="2025-05-14T18:06:02.794128023Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 18:06:02.794298 containerd[1584]: time="2025-05-14T18:06:02.794267134Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 18:06:02.794329 containerd[1584]: time="2025-05-14T18:06:02.794300376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 18:06:02.794329 containerd[1584]: time="2025-05-14T18:06:02.794316406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 18:06:02.794329 containerd[1584]: time="2025-05-14T18:06:02.794326575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 18:06:02.794381 containerd[1584]: time="2025-05-14T18:06:02.794339159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 18:06:02.794381 containerd[1584]: time="2025-05-14T18:06:02.794350160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 18:06:02.794381 containerd[1584]: time="2025-05-14T18:06:02.794361281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 18:06:02.794381 containerd[1584]: time="2025-05-14T18:06:02.794372411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 18:06:02.794514 containerd[1584]: time="2025-05-14T18:06:02.794384284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 18:06:02.794514 containerd[1584]: time="2025-05-14T18:06:02.794396086Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 18:06:02.794514 containerd[1584]: time="2025-05-14T18:06:02.794406916Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 18:06:02.794514 containerd[1584]: time="2025-05-14T18:06:02.794478761Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 18:06:02.794514 containerd[1584]: time="2025-05-14T18:06:02.794492767Z" level=info msg="Start snapshots syncer"
May 14 18:06:02.794604 containerd[1584]: time="2025-05-14T18:06:02.794526510Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 18:06:02.794799 containerd[1584]: time="2025-05-14T18:06:02.794752975Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 14 18:06:02.794925 containerd[1584]: time="2025-05-14T18:06:02.794803811Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 18:06:02.795636 containerd[1584]: time="2025-05-14T18:06:02.795601507Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:06:02.795756 containerd[1584]: time="2025-05-14T18:06:02.795724608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:06:02.795756 containerd[1584]: time="2025-05-14T18:06:02.795749474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:06:02.795795 containerd[1584]: time="2025-05-14T18:06:02.795759664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:06:02.795795 containerd[1584]: time="2025-05-14T18:06:02.795776705Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:06:02.795795 containerd[1584]: time="2025-05-14T18:06:02.795788297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:06:02.795870 containerd[1584]: time="2025-05-14T18:06:02.795799268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:06:02.795870 containerd[1584]: time="2025-05-14T18:06:02.795814867Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:06:02.795870 containerd[1584]: time="2025-05-14T18:06:02.795835095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:06:02.795870 containerd[1584]: time="2025-05-14T18:06:02.795862907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:06:02.795947 containerd[1584]: time="2025-05-14T18:06:02.795873297Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:06:02.795947 containerd[1584]: time="2025-05-14T18:06:02.795904205Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:06:02.795947 containerd[1584]: time="2025-05-14T18:06:02.795919203Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:06:02.795947 containerd[1584]: time="2025-05-14T18:06:02.795926697Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:06:02.795947 containerd[1584]: time="2025-05-14T18:06:02.795935543Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:06:02.795947 containerd[1584]: time="2025-05-14T18:06:02.795943218Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:06:02.796068 containerd[1584]: time="2025-05-14T18:06:02.795956132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:06:02.796068 containerd[1584]: time="2025-05-14T18:06:02.795966612Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:06:02.796068 containerd[1584]: time="2025-05-14T18:06:02.795982722Z" level=info msg="runtime interface created" May 14 18:06:02.796068 containerd[1584]: time="2025-05-14T18:06:02.795987952Z" level=info msg="created NRI interface" May 14 18:06:02.796068 containerd[1584]: time="2025-05-14T18:06:02.795995877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:06:02.796068 containerd[1584]: time="2025-05-14T18:06:02.796005825Z" level=info msg="Connect containerd service" May 14 18:06:02.796068 containerd[1584]: time="2025-05-14T18:06:02.796043786Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:06:02.799206 
containerd[1584]: time="2025-05-14T18:06:02.798626531Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:06:02.806483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:06:02.885350 containerd[1584]: time="2025-05-14T18:06:02.885239091Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:06:02.885350 containerd[1584]: time="2025-05-14T18:06:02.885306788Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:06:02.885350 containerd[1584]: time="2025-05-14T18:06:02.885326285Z" level=info msg="Start subscribing containerd event" May 14 18:06:02.885350 containerd[1584]: time="2025-05-14T18:06:02.885346934Z" level=info msg="Start recovering state" May 14 18:06:02.885510 containerd[1584]: time="2025-05-14T18:06:02.885418257Z" level=info msg="Start event monitor" May 14 18:06:02.885510 containerd[1584]: time="2025-05-14T18:06:02.885431362Z" level=info msg="Start cni network conf syncer for default" May 14 18:06:02.885510 containerd[1584]: time="2025-05-14T18:06:02.885438095Z" level=info msg="Start streaming server" May 14 18:06:02.885510 containerd[1584]: time="2025-05-14T18:06:02.885445979Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:06:02.885510 containerd[1584]: time="2025-05-14T18:06:02.885453293Z" level=info msg="runtime interface starting up..." May 14 18:06:02.885510 containerd[1584]: time="2025-05-14T18:06:02.885458773Z" level=info msg="starting plugins..." 
May 14 18:06:02.885510 containerd[1584]: time="2025-05-14T18:06:02.885471748Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:06:02.885643 containerd[1584]: time="2025-05-14T18:06:02.885582365Z" level=info msg="containerd successfully booted in 0.107523s" May 14 18:06:02.885701 systemd[1]: Started containerd.service - containerd container runtime. May 14 18:06:02.900461 tar[1579]: linux-amd64/LICENSE May 14 18:06:02.900520 tar[1579]: linux-amd64/README.md May 14 18:06:02.925024 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:06:03.357071 systemd-networkd[1490]: eth0: Gained IPv6LL May 14 18:06:03.360088 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:06:03.361908 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:06:03.364504 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 18:06:03.367007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:06:03.386265 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:06:03.411265 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:06:03.413052 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 18:06:03.413307 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 18:06:03.415662 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:06:03.993022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:06:03.994717 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:06:03.996241 systemd[1]: Startup finished in 2.806s (kernel) + 5.642s (initrd) + 3.768s (userspace) = 12.217s. 
May 14 18:06:04.003365 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:06:04.400922 kubelet[1691]: E0514 18:06:04.399749 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:06:04.403619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:06:04.403870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:06:04.404270 systemd[1]: kubelet.service: Consumed 889ms CPU time, 236.1M memory peak.
May 14 18:06:07.569398 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 18:06:07.570747 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:38560.service - OpenSSH per-connection server daemon (10.0.0.1:38560).
May 14 18:06:07.646670 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 38560 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:06:07.648636 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:07.655754 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 18:06:07.656983 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 18:06:07.664288 systemd-logind[1566]: New session 1 of user core.
May 14 18:06:07.682645 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 18:06:07.686028 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 18:06:07.703116 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 18:06:07.705281 systemd-logind[1566]: New session c1 of user core.
May 14 18:06:07.853075 systemd[1708]: Queued start job for default target default.target.
May 14 18:06:07.870005 systemd[1708]: Created slice app.slice - User Application Slice.
May 14 18:06:07.870028 systemd[1708]: Reached target paths.target - Paths.
May 14 18:06:07.870064 systemd[1708]: Reached target timers.target - Timers.
May 14 18:06:07.871543 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 18:06:07.883307 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 18:06:07.883422 systemd[1708]: Reached target sockets.target - Sockets.
May 14 18:06:07.883459 systemd[1708]: Reached target basic.target - Basic System.
May 14 18:06:07.883496 systemd[1708]: Reached target default.target - Main User Target.
May 14 18:06:07.883525 systemd[1708]: Startup finished in 172ms.
May 14 18:06:07.884041 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 18:06:07.885721 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 18:06:07.954266 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:38564.service - OpenSSH per-connection server daemon (10.0.0.1:38564).
May 14 18:06:07.993720 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 38564 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:06:07.995026 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:07.998991 systemd-logind[1566]: New session 2 of user core.
May 14 18:06:08.012957 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 18:06:08.065532 sshd[1721]: Connection closed by 10.0.0.1 port 38564
May 14 18:06:08.065908 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
May 14 18:06:08.077160 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:38564.service: Deactivated successfully.
May 14 18:06:08.078777 systemd[1]: session-2.scope: Deactivated successfully.
May 14 18:06:08.079478 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit.
May 14 18:06:08.082273 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:38570.service - OpenSSH per-connection server daemon (10.0.0.1:38570).
May 14 18:06:08.082805 systemd-logind[1566]: Removed session 2.
May 14 18:06:08.133700 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 38570 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:06:08.135107 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:08.139053 systemd-logind[1566]: New session 3 of user core.
May 14 18:06:08.152965 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 18:06:08.200543 sshd[1729]: Connection closed by 10.0.0.1 port 38570
May 14 18:06:08.200898 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
May 14 18:06:08.219462 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:38570.service: Deactivated successfully.
May 14 18:06:08.221280 systemd[1]: session-3.scope: Deactivated successfully.
May 14 18:06:08.222068 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit.
May 14 18:06:08.225106 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:38580.service - OpenSSH per-connection server daemon (10.0.0.1:38580).
May 14 18:06:08.225661 systemd-logind[1566]: Removed session 3.
May 14 18:06:08.284261 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 38580 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:06:08.285566 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:08.289796 systemd-logind[1566]: New session 4 of user core.
May 14 18:06:08.300965 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 18:06:08.353572 sshd[1737]: Connection closed by 10.0.0.1 port 38580
May 14 18:06:08.353951 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
May 14 18:06:08.367379 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:38580.service: Deactivated successfully.
May 14 18:06:08.369048 systemd[1]: session-4.scope: Deactivated successfully.
May 14 18:06:08.369821 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit.
May 14 18:06:08.372736 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:38584.service - OpenSSH per-connection server daemon (10.0.0.1:38584).
May 14 18:06:08.373262 systemd-logind[1566]: Removed session 4.
May 14 18:06:08.425651 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 38584 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:06:08.427328 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:08.431393 systemd-logind[1566]: New session 5 of user core.
May 14 18:06:08.441958 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 18:06:08.499126 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 18:06:08.499443 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:06:08.520029 sudo[1746]: pam_unix(sudo:session): session closed for user root
May 14 18:06:08.521530 sshd[1745]: Connection closed by 10.0.0.1 port 38584
May 14 18:06:08.521887 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
May 14 18:06:08.540383 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:38584.service: Deactivated successfully.
May 14 18:06:08.541983 systemd[1]: session-5.scope: Deactivated successfully.
May 14 18:06:08.542611 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit.
May 14 18:06:08.545196 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:38590.service - OpenSSH per-connection server daemon (10.0.0.1:38590).
May 14 18:06:08.545700 systemd-logind[1566]: Removed session 5.
May 14 18:06:08.606316 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 38590 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:06:08.607674 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:08.611803 systemd-logind[1566]: New session 6 of user core.
May 14 18:06:08.620985 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 18:06:08.672860 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 18:06:08.673154 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:06:08.929763 sudo[1756]: pam_unix(sudo:session): session closed for user root
May 14 18:06:08.935733 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 18:06:08.936051 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:06:08.945214 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:06:08.996089 augenrules[1778]: No rules
May 14 18:06:08.997753 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:06:08.998049 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:06:08.999059 sudo[1755]: pam_unix(sudo:session): session closed for user root
May 14 18:06:09.000459 sshd[1754]: Connection closed by 10.0.0.1 port 38590
May 14 18:06:09.000785 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
May 14 18:06:09.009235 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:38590.service: Deactivated successfully.
May 14 18:06:09.010749 systemd[1]: session-6.scope: Deactivated successfully.
May 14 18:06:09.011497 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit.
May 14 18:06:09.014432 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:38604.service - OpenSSH per-connection server daemon (10.0.0.1:38604).
May 14 18:06:09.015139 systemd-logind[1566]: Removed session 6.
May 14 18:06:09.070585 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 38604 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:06:09.071966 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:09.076012 systemd-logind[1566]: New session 7 of user core.
May 14 18:06:09.091956 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 18:06:09.143308 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 18:06:09.143607 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:06:09.433488 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 18:06:09.455147 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 18:06:09.665217 dockerd[1810]: time="2025-05-14T18:06:09.665151751Z" level=info msg="Starting up"
May 14 18:06:09.666787 dockerd[1810]: time="2025-05-14T18:06:09.666749931Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 18:06:09.726905 dockerd[1810]: time="2025-05-14T18:06:09.726800123Z" level=info msg="Loading containers: start."
May 14 18:06:09.736869 kernel: Initializing XFRM netlink socket
May 14 18:06:09.965472 systemd-networkd[1490]: docker0: Link UP
May 14 18:06:09.969603 dockerd[1810]: time="2025-05-14T18:06:09.969569612Z" level=info msg="Loading containers: done."
May 14 18:06:09.982130 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3779530931-merged.mount: Deactivated successfully.
May 14 18:06:09.983995 dockerd[1810]: time="2025-05-14T18:06:09.983952116Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 18:06:09.984050 dockerd[1810]: time="2025-05-14T18:06:09.984038943Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 14 18:06:09.984165 dockerd[1810]: time="2025-05-14T18:06:09.984142239Z" level=info msg="Initializing buildkit"
May 14 18:06:10.011153 dockerd[1810]: time="2025-05-14T18:06:10.011122538Z" level=info msg="Completed buildkit initialization"
May 14 18:06:10.016634 dockerd[1810]: time="2025-05-14T18:06:10.016607231Z" level=info msg="Daemon has completed initialization"
May 14 18:06:10.016733 dockerd[1810]: time="2025-05-14T18:06:10.016686688Z" level=info msg="API listen on /run/docker.sock"
May 14 18:06:10.016807 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 18:06:10.657003 containerd[1584]: time="2025-05-14T18:06:10.656953004Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 14 18:06:11.364323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691746526.mount: Deactivated successfully.
May 14 18:06:12.209169 containerd[1584]: time="2025-05-14T18:06:12.209100985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:12.209866 containerd[1584]: time="2025-05-14T18:06:12.209835407Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 14 18:06:12.211074 containerd[1584]: time="2025-05-14T18:06:12.210898030Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:12.213184 containerd[1584]: time="2025-05-14T18:06:12.213153318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:12.214111 containerd[1584]: time="2025-05-14T18:06:12.214042886Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.557041967s"
May 14 18:06:12.214111 containerd[1584]: time="2025-05-14T18:06:12.214102378Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 14 18:06:12.215473 containerd[1584]: time="2025-05-14T18:06:12.215417657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 14 18:06:13.258299 containerd[1584]: time="2025-05-14T18:06:13.258233689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:13.258956 containerd[1584]: time="2025-05-14T18:06:13.258903280Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 14 18:06:13.259971 containerd[1584]: time="2025-05-14T18:06:13.259938308Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:13.262344 containerd[1584]: time="2025-05-14T18:06:13.262303808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:13.263186 containerd[1584]: time="2025-05-14T18:06:13.263156998Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.047698806s"
May 14 18:06:13.263221 containerd[1584]: time="2025-05-14T18:06:13.263187179Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 14 18:06:13.263936 containerd[1584]: time="2025-05-14T18:06:13.263884644Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 14 18:06:14.429805 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 18:06:14.431439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:06:14.643980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:06:14.657129 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:06:14.706287 kubelet[2091]: E0514 18:06:14.706149 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:06:14.712419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:06:14.712645 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:06:14.713096 systemd[1]: kubelet.service: Consumed 191ms CPU time, 96.1M memory peak.
May 14 18:06:14.830032 containerd[1584]: time="2025-05-14T18:06:14.829972310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:14.830957 containerd[1584]: time="2025-05-14T18:06:14.830884015Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 14 18:06:14.831972 containerd[1584]: time="2025-05-14T18:06:14.831947924Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:14.834324 containerd[1584]: time="2025-05-14T18:06:14.834299511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:14.835143 containerd[1584]: time="2025-05-14T18:06:14.835108864Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.571182218s"
May 14 18:06:14.835187 containerd[1584]: time="2025-05-14T18:06:14.835146486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 14 18:06:14.835603 containerd[1584]: time="2025-05-14T18:06:14.835565496Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 14 18:06:15.702040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044672595.mount: Deactivated successfully.
May 14 18:06:16.331005 containerd[1584]: time="2025-05-14T18:06:16.330942375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:16.331653 containerd[1584]: time="2025-05-14T18:06:16.331588770Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 14 18:06:16.332718 containerd[1584]: time="2025-05-14T18:06:16.332682996Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:16.336187 containerd[1584]: time="2025-05-14T18:06:16.336129442Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.500519116s"
May 14 18:06:16.336187 containerd[1584]: time="2025-05-14T18:06:16.336179740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 14 18:06:16.336578 containerd[1584]: time="2025-05-14T18:06:16.336441375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:16.337026 containerd[1584]: time="2025-05-14T18:06:16.336988203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 18:06:16.937071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983315761.mount: Deactivated successfully.
May 14 18:06:17.838638 containerd[1584]: time="2025-05-14T18:06:17.838575163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:17.854977 containerd[1584]: time="2025-05-14T18:06:17.854919799Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 14 18:06:17.862579 containerd[1584]: time="2025-05-14T18:06:17.862535387Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:17.866440 containerd[1584]: time="2025-05-14T18:06:17.866394532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:17.867387 containerd[1584]: time="2025-05-14T18:06:17.867339001Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.530311595s"
May 14 18:06:17.867387 containerd[1584]: time="2025-05-14T18:06:17.867373601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 14 18:06:17.869925 containerd[1584]: time="2025-05-14T18:06:17.869901560Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 18:06:18.365298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486225391.mount: Deactivated successfully.
May 14 18:06:18.370876 containerd[1584]: time="2025-05-14T18:06:18.370804347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:06:18.371539 containerd[1584]: time="2025-05-14T18:06:18.371487357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 14 18:06:18.372561 containerd[1584]: time="2025-05-14T18:06:18.372530090Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:06:18.374352 containerd[1584]: time="2025-05-14T18:06:18.374310613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:06:18.374909 containerd[1584]: time="2025-05-14T18:06:18.374877301Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 504.950728ms"
May 14 18:06:18.374909 containerd[1584]: time="2025-05-14T18:06:18.374907897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 14 18:06:18.375339 containerd[1584]: time="2025-05-14T18:06:18.375313252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 14 18:06:18.895477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053823965.mount: Deactivated successfully.
May 14 18:06:20.951026 containerd[1584]: time="2025-05-14T18:06:20.950960760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:20.951707 containerd[1584]: time="2025-05-14T18:06:20.951655961Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 14 18:06:20.952776 containerd[1584]: time="2025-05-14T18:06:20.952731963Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:20.955439 containerd[1584]: time="2025-05-14T18:06:20.955396713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:20.956267 containerd[1584]: time="2025-05-14T18:06:20.956232965Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size
\"56909194\" in 2.580892617s" May 14 18:06:20.956267 containerd[1584]: time="2025-05-14T18:06:20.956265518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 14 18:06:23.450489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:06:23.450666 systemd[1]: kubelet.service: Consumed 191ms CPU time, 96.1M memory peak. May 14 18:06:23.452819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:06:23.476380 systemd[1]: Reload requested from client PID 2239 ('systemctl') (unit session-7.scope)... May 14 18:06:23.476397 systemd[1]: Reloading... May 14 18:06:23.554871 zram_generator::config[2282]: No configuration found. May 14 18:06:23.756223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:06:23.868577 systemd[1]: Reloading finished in 391 ms. May 14 18:06:23.929454 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:06:23.929549 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:06:23.929826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:06:23.929891 systemd[1]: kubelet.service: Consumed 125ms CPU time, 83.6M memory peak. May 14 18:06:23.931423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:06:24.088485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:06:24.092148 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:06:24.129583 kubelet[2330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:06:24.129583 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:06:24.129583 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:06:24.130027 kubelet[2330]: I0514 18:06:24.129651 2330 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:06:24.294269 kubelet[2330]: I0514 18:06:24.294211 2330 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:06:24.294269 kubelet[2330]: I0514 18:06:24.294244 2330 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:06:24.294509 kubelet[2330]: I0514 18:06:24.294482 2330 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:06:24.318509 kubelet[2330]: I0514 18:06:24.318466 2330 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:06:24.318795 kubelet[2330]: E0514 18:06:24.318742 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" May 14 18:06:24.325061 kubelet[2330]: I0514 18:06:24.325027 2330 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:06:24.331820 kubelet[2330]: I0514 18:06:24.331792 2330 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 18:06:24.332660 kubelet[2330]: I0514 18:06:24.332631 2330 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:06:24.332806 kubelet[2330]: I0514 18:06:24.332768 2330 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:06:24.332980 kubelet[2330]: I0514 18:06:24.332792 2330 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesF
ree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:06:24.332980 kubelet[2330]: I0514 18:06:24.332968 2330 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:06:24.332980 kubelet[2330]: I0514 18:06:24.332976 2330 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:06:24.333119 kubelet[2330]: I0514 18:06:24.333085 2330 state_mem.go:36] "Initialized new in-memory state store" May 14 18:06:24.334355 kubelet[2330]: I0514 18:06:24.334327 2330 kubelet.go:408] "Attempting to sync node with API server" May 14 18:06:24.334355 kubelet[2330]: I0514 18:06:24.334345 2330 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:06:24.334412 kubelet[2330]: I0514 18:06:24.334382 2330 kubelet.go:314] "Adding apiserver pod source" May 14 18:06:24.334412 kubelet[2330]: I0514 18:06:24.334396 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:06:24.338484 kubelet[2330]: W0514 18:06:24.338439 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused May 14 18:06:24.338572 kubelet[2330]: E0514 18:06:24.338505 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" May 14 18:06:24.338572 kubelet[2330]: W0514 18:06:24.338512 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused May 14 18:06:24.338628 kubelet[2330]: E0514 18:06:24.338573 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" May 14 18:06:24.339991 kubelet[2330]: I0514 18:06:24.339964 2330 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:06:24.341471 kubelet[2330]: I0514 18:06:24.341449 2330 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:06:24.342121 kubelet[2330]: W0514 18:06:24.342108 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 18:06:24.343128 kubelet[2330]: I0514 18:06:24.342937 2330 server.go:1269] "Started kubelet" May 14 18:06:24.343895 kubelet[2330]: I0514 18:06:24.343734 2330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:06:24.344169 kubelet[2330]: I0514 18:06:24.344144 2330 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:06:24.344232 kubelet[2330]: I0514 18:06:24.344205 2330 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:06:24.344444 kubelet[2330]: I0514 18:06:24.344430 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:06:24.345269 kubelet[2330]: I0514 18:06:24.345244 2330 server.go:460] "Adding debug handlers to kubelet server" May 14 18:06:24.346062 kubelet[2330]: I0514 18:06:24.346042 2330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:06:24.346106 kubelet[2330]: I0514 18:06:24.346071 2330 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:06:24.346160 kubelet[2330]: I0514 18:06:24.346139 2330 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:06:24.346219 kubelet[2330]: I0514 18:06:24.346200 2330 reconciler.go:26] "Reconciler: start to sync state" May 14 18:06:24.346462 kubelet[2330]: W0514 18:06:24.346425 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused May 14 18:06:24.346505 kubelet[2330]: E0514 18:06:24.346463 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" May 14 18:06:24.346783 kubelet[2330]: E0514 18:06:24.346757 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:06:24.346854 kubelet[2330]: E0514 18:06:24.346813 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms" May 14 18:06:24.352144 kubelet[2330]: E0514 18:06:24.352118 2330 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:06:24.352504 kubelet[2330]: I0514 18:06:24.352491 2330 factory.go:221] Registration of the containerd container factory successfully May 14 18:06:24.352610 kubelet[2330]: I0514 18:06:24.352600 2330 factory.go:221] Registration of the systemd container factory successfully May 14 18:06:24.352763 kubelet[2330]: E0514 18:06:24.349747 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f76f727fcba62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:06:24.342915682 +0000 UTC m=+0.247455912,LastTimestamp:2025-05-14 18:06:24.342915682 +0000 UTC m=+0.247455912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 18:06:24.352911 kubelet[2330]: I0514 18:06:24.352893 2330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:06:24.361788 kubelet[2330]: I0514 18:06:24.361738 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:06:24.362944 kubelet[2330]: I0514 18:06:24.362912 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:06:24.362986 kubelet[2330]: I0514 18:06:24.362956 2330 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:06:24.362986 kubelet[2330]: I0514 18:06:24.362973 2330 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:06:24.363034 kubelet[2330]: E0514 18:06:24.363018 2330 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:06:24.368166 kubelet[2330]: W0514 18:06:24.368105 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused May 14 18:06:24.368166 kubelet[2330]: E0514 18:06:24.368154 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" May 14 18:06:24.368247 kubelet[2330]: I0514 18:06:24.368222 2330 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:06:24.368247 kubelet[2330]: I0514 18:06:24.368233 2330 
cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:06:24.368247 kubelet[2330]: I0514 18:06:24.368247 2330 state_mem.go:36] "Initialized new in-memory state store" May 14 18:06:24.447606 kubelet[2330]: E0514 18:06:24.447579 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:06:24.463854 kubelet[2330]: E0514 18:06:24.463811 2330 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:06:24.547400 kubelet[2330]: E0514 18:06:24.547353 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" May 14 18:06:24.548388 kubelet[2330]: E0514 18:06:24.548359 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:06:24.648871 kubelet[2330]: E0514 18:06:24.648734 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:06:24.661067 kubelet[2330]: I0514 18:06:24.661022 2330 policy_none.go:49] "None policy: Start" May 14 18:06:24.661633 kubelet[2330]: I0514 18:06:24.661604 2330 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:06:24.661633 kubelet[2330]: I0514 18:06:24.661629 2330 state_mem.go:35] "Initializing new in-memory state store" May 14 18:06:24.664080 kubelet[2330]: E0514 18:06:24.664029 2330 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:06:24.668772 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:06:24.681710 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 14 18:06:24.685380 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 18:06:24.710701 kubelet[2330]: I0514 18:06:24.710675 2330 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:06:24.710915 kubelet[2330]: I0514 18:06:24.710882 2330 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:06:24.710947 kubelet[2330]: I0514 18:06:24.710902 2330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:06:24.711858 kubelet[2330]: I0514 18:06:24.711316 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:06:24.714013 kubelet[2330]: E0514 18:06:24.713996 2330 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 18:06:24.812649 kubelet[2330]: I0514 18:06:24.812586 2330 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:06:24.813084 kubelet[2330]: E0514 18:06:24.813041 2330 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" May 14 18:06:24.948677 kubelet[2330]: E0514 18:06:24.948585 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" May 14 18:06:25.014831 kubelet[2330]: I0514 18:06:25.014771 2330 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:06:25.015202 kubelet[2330]: E0514 18:06:25.015166 2330 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 
10.0.0.82:6443: connect: connection refused" node="localhost" May 14 18:06:25.073477 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 18:06:25.094831 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 18:06:25.098256 systemd[1]: Created slice kubepods-burstable-pod116c342beca9c382cc3b7b5595d301d2.slice - libcontainer container kubepods-burstable-pod116c342beca9c382cc3b7b5595d301d2.slice. May 14 18:06:25.151356 kubelet[2330]: I0514 18:06:25.151309 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:06:25.151356 kubelet[2330]: I0514 18:06:25.151354 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:06:25.151728 kubelet[2330]: I0514 18:06:25.151376 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 18:06:25.151728 kubelet[2330]: I0514 18:06:25.151394 2330 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/116c342beca9c382cc3b7b5595d301d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"116c342beca9c382cc3b7b5595d301d2\") " pod="kube-system/kube-apiserver-localhost" May 14 18:06:25.151728 kubelet[2330]: I0514 18:06:25.151410 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/116c342beca9c382cc3b7b5595d301d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"116c342beca9c382cc3b7b5595d301d2\") " pod="kube-system/kube-apiserver-localhost" May 14 18:06:25.151728 kubelet[2330]: I0514 18:06:25.151451 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:06:25.151728 kubelet[2330]: I0514 18:06:25.151489 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:06:25.151832 kubelet[2330]: I0514 18:06:25.151549 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:06:25.151832 kubelet[2330]: I0514 18:06:25.151587 2330 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/116c342beca9c382cc3b7b5595d301d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"116c342beca9c382cc3b7b5595d301d2\") " pod="kube-system/kube-apiserver-localhost" May 14 18:06:25.342951 kubelet[2330]: W0514 18:06:25.342786 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused May 14 18:06:25.342951 kubelet[2330]: E0514 18:06:25.342878 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" May 14 18:06:25.391823 containerd[1584]: time="2025-05-14T18:06:25.391777955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 18:06:25.397368 containerd[1584]: time="2025-05-14T18:06:25.397332209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 18:06:25.400900 containerd[1584]: time="2025-05-14T18:06:25.400861618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:116c342beca9c382cc3b7b5595d301d2,Namespace:kube-system,Attempt:0,}" May 14 18:06:25.412352 kubelet[2330]: W0514 18:06:25.412276 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
May 14 18:06:25.412352 kubelet[2330]: E0514 18:06:25.412330 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
May 14 18:06:25.416723 kubelet[2330]: I0514 18:06:25.416698 2330 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:06:25.417162 kubelet[2330]: E0514 18:06:25.417111 2330 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
May 14 18:06:25.418881 containerd[1584]: time="2025-05-14T18:06:25.418816108Z" level=info msg="connecting to shim efecaf7f015d97c38b1336dbcd44f43595183f600be2f318768e5af2ce3f7983" address="unix:///run/containerd/s/b0d5665177298b57f67b4350c584e8c397ac84047007848b160cbd8693e1bb63" namespace=k8s.io protocol=ttrpc version=3
May 14 18:06:25.439003 containerd[1584]: time="2025-05-14T18:06:25.438951534Z" level=info msg="connecting to shim 08a132b94afc86bef76ce2677b9f20ba040b45f4cd7825c7d39969a9b0ccc3aa" address="unix:///run/containerd/s/9f684645ea5b80c405307e7f76fcb1b499271179894bdd07be1503f7ef15ce9a" namespace=k8s.io protocol=ttrpc version=3
May 14 18:06:25.439527 containerd[1584]: time="2025-05-14T18:06:25.439483373Z" level=info msg="connecting to shim c07f38679aa41594a56774575d07c42802f7d60e32801e4657146a4bcf52249a" address="unix:///run/containerd/s/4963662661f14f6a4649ed4697728affe75009e7319309f16d2ba3d1adcb860f" namespace=k8s.io protocol=ttrpc version=3
May 14 18:06:25.453988 systemd[1]: Started cri-containerd-efecaf7f015d97c38b1336dbcd44f43595183f600be2f318768e5af2ce3f7983.scope - libcontainer container efecaf7f015d97c38b1336dbcd44f43595183f600be2f318768e5af2ce3f7983.
May 14 18:06:25.462872 systemd[1]: Started cri-containerd-08a132b94afc86bef76ce2677b9f20ba040b45f4cd7825c7d39969a9b0ccc3aa.scope - libcontainer container 08a132b94afc86bef76ce2677b9f20ba040b45f4cd7825c7d39969a9b0ccc3aa.
May 14 18:06:25.469523 systemd[1]: Started cri-containerd-c07f38679aa41594a56774575d07c42802f7d60e32801e4657146a4bcf52249a.scope - libcontainer container c07f38679aa41594a56774575d07c42802f7d60e32801e4657146a4bcf52249a.
May 14 18:06:25.502393 containerd[1584]: time="2025-05-14T18:06:25.502343496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"efecaf7f015d97c38b1336dbcd44f43595183f600be2f318768e5af2ce3f7983\""
May 14 18:06:25.506353 containerd[1584]: time="2025-05-14T18:06:25.506316822Z" level=info msg="CreateContainer within sandbox \"efecaf7f015d97c38b1336dbcd44f43595183f600be2f318768e5af2ce3f7983\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 18:06:25.508049 containerd[1584]: time="2025-05-14T18:06:25.508010265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:116c342beca9c382cc3b7b5595d301d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"08a132b94afc86bef76ce2677b9f20ba040b45f4cd7825c7d39969a9b0ccc3aa\""
May 14 18:06:25.511979 containerd[1584]: time="2025-05-14T18:06:25.511924456Z" level=info msg="CreateContainer within sandbox \"08a132b94afc86bef76ce2677b9f20ba040b45f4cd7825c7d39969a9b0ccc3aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 18:06:25.518889 containerd[1584]: time="2025-05-14T18:06:25.518824371Z" level=info msg="Container 49adfe14c53beb6044976ea4b18b8cbef086e378a9a27dc4a0cf80f68f9cfb78: CDI devices from CRI Config.CDIDevices: []"
May 14 18:06:25.521532 containerd[1584]: time="2025-05-14T18:06:25.521497893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c07f38679aa41594a56774575d07c42802f7d60e32801e4657146a4bcf52249a\""
May 14 18:06:25.523221 containerd[1584]: time="2025-05-14T18:06:25.523198760Z" level=info msg="CreateContainer within sandbox \"c07f38679aa41594a56774575d07c42802f7d60e32801e4657146a4bcf52249a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 18:06:25.525261 containerd[1584]: time="2025-05-14T18:06:25.524657818Z" level=info msg="Container 726c8e09c2d9d770ada73b5c4263483499a04a4aaddaf3d1826dc69b22788d1e: CDI devices from CRI Config.CDIDevices: []"
May 14 18:06:25.527873 containerd[1584]: time="2025-05-14T18:06:25.527826856Z" level=info msg="CreateContainer within sandbox \"efecaf7f015d97c38b1336dbcd44f43595183f600be2f318768e5af2ce3f7983\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49adfe14c53beb6044976ea4b18b8cbef086e378a9a27dc4a0cf80f68f9cfb78\""
May 14 18:06:25.528403 containerd[1584]: time="2025-05-14T18:06:25.528366980Z" level=info msg="StartContainer for \"49adfe14c53beb6044976ea4b18b8cbef086e378a9a27dc4a0cf80f68f9cfb78\""
May 14 18:06:25.529356 containerd[1584]: time="2025-05-14T18:06:25.529324956Z" level=info msg="connecting to shim 49adfe14c53beb6044976ea4b18b8cbef086e378a9a27dc4a0cf80f68f9cfb78" address="unix:///run/containerd/s/b0d5665177298b57f67b4350c584e8c397ac84047007848b160cbd8693e1bb63" protocol=ttrpc version=3
May 14 18:06:25.531990 containerd[1584]: time="2025-05-14T18:06:25.531954692Z" level=info msg="CreateContainer within sandbox \"08a132b94afc86bef76ce2677b9f20ba040b45f4cd7825c7d39969a9b0ccc3aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"726c8e09c2d9d770ada73b5c4263483499a04a4aaddaf3d1826dc69b22788d1e\""
May 14 18:06:25.532540 containerd[1584]: time="2025-05-14T18:06:25.532519720Z" level=info msg="StartContainer for \"726c8e09c2d9d770ada73b5c4263483499a04a4aaddaf3d1826dc69b22788d1e\""
May 14 18:06:25.533472 containerd[1584]: time="2025-05-14T18:06:25.533414249Z" level=info msg="connecting to shim 726c8e09c2d9d770ada73b5c4263483499a04a4aaddaf3d1826dc69b22788d1e" address="unix:///run/containerd/s/9f684645ea5b80c405307e7f76fcb1b499271179894bdd07be1503f7ef15ce9a" protocol=ttrpc version=3
May 14 18:06:25.539166 containerd[1584]: time="2025-05-14T18:06:25.539136810Z" level=info msg="Container dba24078e5ce28a915d083b9ae44ff12606f9fb26547aa0d4f7d27c17743ad26: CDI devices from CRI Config.CDIDevices: []"
May 14 18:06:25.546770 containerd[1584]: time="2025-05-14T18:06:25.546743015Z" level=info msg="CreateContainer within sandbox \"c07f38679aa41594a56774575d07c42802f7d60e32801e4657146a4bcf52249a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dba24078e5ce28a915d083b9ae44ff12606f9fb26547aa0d4f7d27c17743ad26\""
May 14 18:06:25.547373 containerd[1584]: time="2025-05-14T18:06:25.547151122Z" level=info msg="StartContainer for \"dba24078e5ce28a915d083b9ae44ff12606f9fb26547aa0d4f7d27c17743ad26\""
May 14 18:06:25.548088 containerd[1584]: time="2025-05-14T18:06:25.548060009Z" level=info msg="connecting to shim dba24078e5ce28a915d083b9ae44ff12606f9fb26547aa0d4f7d27c17743ad26" address="unix:///run/containerd/s/4963662661f14f6a4649ed4697728affe75009e7319309f16d2ba3d1adcb860f" protocol=ttrpc version=3
May 14 18:06:25.548971 systemd[1]: Started cri-containerd-49adfe14c53beb6044976ea4b18b8cbef086e378a9a27dc4a0cf80f68f9cfb78.scope - libcontainer container 49adfe14c53beb6044976ea4b18b8cbef086e378a9a27dc4a0cf80f68f9cfb78.
May 14 18:06:25.552441 systemd[1]: Started cri-containerd-726c8e09c2d9d770ada73b5c4263483499a04a4aaddaf3d1826dc69b22788d1e.scope - libcontainer container 726c8e09c2d9d770ada73b5c4263483499a04a4aaddaf3d1826dc69b22788d1e.
May 14 18:06:25.576008 systemd[1]: Started cri-containerd-dba24078e5ce28a915d083b9ae44ff12606f9fb26547aa0d4f7d27c17743ad26.scope - libcontainer container dba24078e5ce28a915d083b9ae44ff12606f9fb26547aa0d4f7d27c17743ad26.
May 14 18:06:25.691747 containerd[1584]: time="2025-05-14T18:06:25.691698745Z" level=info msg="StartContainer for \"dba24078e5ce28a915d083b9ae44ff12606f9fb26547aa0d4f7d27c17743ad26\" returns successfully"
May 14 18:06:25.692727 containerd[1584]: time="2025-05-14T18:06:25.692525275Z" level=info msg="StartContainer for \"49adfe14c53beb6044976ea4b18b8cbef086e378a9a27dc4a0cf80f68f9cfb78\" returns successfully"
May 14 18:06:25.692805 containerd[1584]: time="2025-05-14T18:06:25.692607002Z" level=info msg="StartContainer for \"726c8e09c2d9d770ada73b5c4263483499a04a4aaddaf3d1826dc69b22788d1e\" returns successfully"
May 14 18:06:26.219057 kubelet[2330]: I0514 18:06:26.218961 2330 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:06:26.596560 kubelet[2330]: E0514 18:06:26.596407 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 14 18:06:26.693721 kubelet[2330]: I0514 18:06:26.693646 2330 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 14 18:06:26.693721 kubelet[2330]: E0514 18:06:26.693699 2330 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 14 18:06:26.701441 kubelet[2330]: E0514 18:06:26.701375 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:26.802495 kubelet[2330]: E0514 18:06:26.802452 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:26.903031 kubelet[2330]: E0514 18:06:26.902865 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.003414 kubelet[2330]: E0514 18:06:27.003377 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.103530 kubelet[2330]: E0514 18:06:27.103479 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.204244 kubelet[2330]: E0514 18:06:27.204199 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.304762 kubelet[2330]: E0514 18:06:27.304712 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.405150 kubelet[2330]: E0514 18:06:27.405102 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.505621 kubelet[2330]: E0514 18:06:27.505518 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.606129 kubelet[2330]: E0514 18:06:27.606077 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.707222 kubelet[2330]: E0514 18:06:27.707177 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.807922 kubelet[2330]: E0514 18:06:27.807811 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:27.908833 kubelet[2330]: E0514 18:06:27.908805 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:28.009345 kubelet[2330]: E0514 18:06:28.009290 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:28.109909 kubelet[2330]: E0514 18:06:28.109775 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:28.210309 kubelet[2330]: E0514 18:06:28.210265 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:28.310592 kubelet[2330]: E0514 18:06:28.310566 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:06:29.031172 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)...
May 14 18:06:29.031189 systemd[1]: Reloading...
May 14 18:06:29.104872 zram_generator::config[2638]: No configuration found.
May 14 18:06:29.203763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:06:29.331432 systemd[1]: Reloading finished in 299 ms.
May 14 18:06:29.341765 kubelet[2330]: I0514 18:06:29.341736 2330 apiserver.go:52] "Watching apiserver"
May 14 18:06:29.346858 kubelet[2330]: I0514 18:06:29.346811 2330 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 14 18:06:29.356619 kubelet[2330]: I0514 18:06:29.356577 2330 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 18:06:29.356724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:06:29.378377 systemd[1]: kubelet.service: Deactivated successfully.
May 14 18:06:29.378655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:06:29.378708 systemd[1]: kubelet.service: Consumed 648ms CPU time, 117.8M memory peak.
May 14 18:06:29.380626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:06:29.562758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:06:29.573178 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 18:06:29.625948 kubelet[2683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:06:29.625948 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 18:06:29.625948 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:06:29.626880 kubelet[2683]: I0514 18:06:29.626346 2683 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 18:06:29.632172 kubelet[2683]: I0514 18:06:29.632150 2683 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 14 18:06:29.632172 kubelet[2683]: I0514 18:06:29.632166 2683 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 18:06:29.632341 kubelet[2683]: I0514 18:06:29.632326 2683 server.go:929] "Client rotation is on, will bootstrap in background"
May 14 18:06:29.634117 kubelet[2683]: I0514 18:06:29.633790 2683 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 18:06:29.636578 kubelet[2683]: I0514 18:06:29.636559 2683 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 18:06:29.640156 kubelet[2683]: I0514 18:06:29.640134 2683 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 18:06:29.644946 kubelet[2683]: I0514 18:06:29.644916 2683 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 18:06:29.645054 kubelet[2683]: I0514 18:06:29.645030 2683 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 14 18:06:29.645218 kubelet[2683]: I0514 18:06:29.645178 2683 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 18:06:29.645390 kubelet[2683]: I0514 18:06:29.645209 2683 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 18:06:29.645390 kubelet[2683]: I0514 18:06:29.645389 2683 topology_manager.go:138] "Creating topology manager with none policy"
May 14 18:06:29.645487 kubelet[2683]: I0514 18:06:29.645397 2683 container_manager_linux.go:300] "Creating device plugin manager"
May 14 18:06:29.645487 kubelet[2683]: I0514 18:06:29.645427 2683 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:06:29.645560 kubelet[2683]: I0514 18:06:29.645544 2683 kubelet.go:408] "Attempting to sync node with API server"
May 14 18:06:29.645560 kubelet[2683]: I0514 18:06:29.645558 2683 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 18:06:29.645607 kubelet[2683]: I0514 18:06:29.645591 2683 kubelet.go:314] "Adding apiserver pod source"
May 14 18:06:29.645607 kubelet[2683]: I0514 18:06:29.645605 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 18:06:29.646333 kubelet[2683]: I0514 18:06:29.646297 2683 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 14 18:06:29.646683 kubelet[2683]: I0514 18:06:29.646659 2683 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 18:06:29.648626 kubelet[2683]: I0514 18:06:29.647105 2683 server.go:1269] "Started kubelet"
May 14 18:06:29.648626 kubelet[2683]: I0514 18:06:29.647378 2683 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 18:06:29.648626 kubelet[2683]: I0514 18:06:29.647489 2683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 18:06:29.648626 kubelet[2683]: I0514 18:06:29.647745 2683 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 18:06:29.648626 kubelet[2683]: I0514 18:06:29.648236 2683 server.go:460] "Adding debug handlers to kubelet server"
May 14 18:06:29.657131 kubelet[2683]: I0514 18:06:29.657088 2683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 18:06:29.658451 kubelet[2683]: I0514 18:06:29.657721 2683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 18:06:29.658724 kubelet[2683]: I0514 18:06:29.658707 2683 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 14 18:06:29.658894 kubelet[2683]: I0514 18:06:29.658880 2683 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 14 18:06:29.659065 kubelet[2683]: I0514 18:06:29.659054 2683 reconciler.go:26] "Reconciler: start to sync state"
May 14 18:06:29.660545 kubelet[2683]: E0514 18:06:29.660492 2683 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 18:06:29.661562 kubelet[2683]: I0514 18:06:29.661535 2683 factory.go:221] Registration of the containerd container factory successfully
May 14 18:06:29.661562 kubelet[2683]: I0514 18:06:29.661559 2683 factory.go:221] Registration of the systemd container factory successfully
May 14 18:06:29.661652 kubelet[2683]: I0514 18:06:29.661617 2683 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 18:06:29.668581 kubelet[2683]: I0514 18:06:29.668529 2683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 18:06:29.670649 kubelet[2683]: I0514 18:06:29.670546 2683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 18:06:29.670920 kubelet[2683]: I0514 18:06:29.670895 2683 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 18:06:29.670971 kubelet[2683]: I0514 18:06:29.670944 2683 kubelet.go:2321] "Starting kubelet main sync loop"
May 14 18:06:29.671019 kubelet[2683]: E0514 18:06:29.670992 2683 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 18:06:29.699973 kubelet[2683]: I0514 18:06:29.699936 2683 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 18:06:29.699973 kubelet[2683]: I0514 18:06:29.699957 2683 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 18:06:29.699973 kubelet[2683]: I0514 18:06:29.699984 2683 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:06:29.700276 kubelet[2683]: I0514 18:06:29.700243 2683 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 18:06:29.700303 kubelet[2683]: I0514 18:06:29.700280 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 18:06:29.700303 kubelet[2683]: I0514 18:06:29.700302 2683 policy_none.go:49] "None policy: Start"
May 14 18:06:29.700991 kubelet[2683]: I0514 18:06:29.700896 2683 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 18:06:29.700991 kubelet[2683]: I0514 18:06:29.700921 2683 state_mem.go:35] "Initializing new in-memory state store"
May 14 18:06:29.701161 kubelet[2683]: I0514 18:06:29.701079 2683 state_mem.go:75] "Updated machine memory state"
May 14 18:06:29.705714 kubelet[2683]: I0514 18:06:29.705687 2683 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 18:06:29.705914 kubelet[2683]: I0514 18:06:29.705891 2683 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 18:06:29.705958 kubelet[2683]: I0514 18:06:29.705905 2683 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 18:06:29.706387 kubelet[2683]: I0514 18:06:29.706108 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 18:06:29.809386 kubelet[2683]: I0514 18:06:29.809355 2683 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:06:29.816384 kubelet[2683]: I0514 18:06:29.816282 2683 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 14 18:06:29.816384 kubelet[2683]: I0514 18:06:29.816385 2683 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 14 18:06:29.860227 kubelet[2683]: I0514 18:06:29.860194 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:06:29.860227 kubelet[2683]: I0514 18:06:29.860223 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:06:29.860383 kubelet[2683]: I0514 18:06:29.860244 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 14 18:06:29.860383 kubelet[2683]: I0514 18:06:29.860258 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:06:29.860383 kubelet[2683]: I0514 18:06:29.860271 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:06:29.860383 kubelet[2683]: I0514 18:06:29.860285 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:06:29.860383 kubelet[2683]: I0514 18:06:29.860300 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/116c342beca9c382cc3b7b5595d301d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"116c342beca9c382cc3b7b5595d301d2\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:06:29.860498 kubelet[2683]: I0514 18:06:29.860367 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/116c342beca9c382cc3b7b5595d301d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"116c342beca9c382cc3b7b5595d301d2\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:06:29.860498 kubelet[2683]: I0514 18:06:29.860419 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/116c342beca9c382cc3b7b5595d301d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"116c342beca9c382cc3b7b5595d301d2\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:06:30.646969 kubelet[2683]: I0514 18:06:30.646925 2683 apiserver.go:52] "Watching apiserver"
May 14 18:06:30.659880 kubelet[2683]: I0514 18:06:30.659576 2683 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 14 18:06:30.700120 kubelet[2683]: E0514 18:06:30.698083 2683 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 14 18:06:30.752279 kubelet[2683]: I0514 18:06:30.752118 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.752088953 podStartE2EDuration="1.752088953s" podCreationTimestamp="2025-05-14 18:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:30.751302475 +0000 UTC m=+1.168339130" watchObservedRunningTime="2025-05-14 18:06:30.752088953 +0000 UTC m=+1.169125607"
May 14 18:06:30.968563 kubelet[2683]: I0514 18:06:30.968476 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.968459383 podStartE2EDuration="1.968459383s" podCreationTimestamp="2025-05-14 18:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:30.968166118 +0000 UTC m=+1.385202772" watchObservedRunningTime="2025-05-14 18:06:30.968459383 +0000 UTC m=+1.385496027"
May 14 18:06:30.995370 kubelet[2683]: I0514 18:06:30.995298 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.995278348 podStartE2EDuration="1.995278348s" podCreationTimestamp="2025-05-14 18:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:30.980729723 +0000 UTC m=+1.397766387" watchObservedRunningTime="2025-05-14 18:06:30.995278348 +0000 UTC m=+1.412315002"
May 14 18:06:33.821219 kubelet[2683]: I0514 18:06:33.821166 2683 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 18:06:33.821697 kubelet[2683]: I0514 18:06:33.821688 2683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 18:06:33.821727 containerd[1584]: time="2025-05-14T18:06:33.821538062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 18:06:34.218604 sudo[1790]: pam_unix(sudo:session): session closed for user root
May 14 18:06:34.219943 sshd[1789]: Connection closed by 10.0.0.1 port 38604
May 14 18:06:34.220302 sshd-session[1787]: pam_unix(sshd:session): session closed for user core
May 14 18:06:34.222951 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:38604.service: Deactivated successfully.
May 14 18:06:34.225024 systemd[1]: session-7.scope: Deactivated successfully.
May 14 18:06:34.225213 systemd[1]: session-7.scope: Consumed 4.243s CPU time, 224.1M memory peak.
May 14 18:06:34.227079 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit.
May 14 18:06:34.228473 systemd-logind[1566]: Removed session 7.
May 14 18:06:34.535538 systemd[1]: Created slice kubepods-besteffort-pod09475220_a8fd_49bd_b027_4841fef9b46b.slice - libcontainer container kubepods-besteffort-pod09475220_a8fd_49bd_b027_4841fef9b46b.slice.
May 14 18:06:34.591409 kubelet[2683]: I0514 18:06:34.591375 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09475220-a8fd-49bd-b027-4841fef9b46b-xtables-lock\") pod \"kube-proxy-9csvq\" (UID: \"09475220-a8fd-49bd-b027-4841fef9b46b\") " pod="kube-system/kube-proxy-9csvq"
May 14 18:06:34.591409 kubelet[2683]: I0514 18:06:34.591415 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09475220-a8fd-49bd-b027-4841fef9b46b-kube-proxy\") pod \"kube-proxy-9csvq\" (UID: \"09475220-a8fd-49bd-b027-4841fef9b46b\") " pod="kube-system/kube-proxy-9csvq"
May 14 18:06:34.591571 kubelet[2683]: I0514 18:06:34.591439 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09475220-a8fd-49bd-b027-4841fef9b46b-lib-modules\") pod \"kube-proxy-9csvq\" (UID: \"09475220-a8fd-49bd-b027-4841fef9b46b\") " pod="kube-system/kube-proxy-9csvq"
May 14 18:06:34.591571 kubelet[2683]: I0514 18:06:34.591460 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhbsq\" (UniqueName: \"kubernetes.io/projected/09475220-a8fd-49bd-b027-4841fef9b46b-kube-api-access-bhbsq\") pod \"kube-proxy-9csvq\" (UID: \"09475220-a8fd-49bd-b027-4841fef9b46b\") " pod="kube-system/kube-proxy-9csvq"
May 14 18:06:34.815798 systemd[1]: Created slice kubepods-besteffort-podc6cd3004_c21d_4c2f_8174_c821d26a2d2b.slice - libcontainer container kubepods-besteffort-podc6cd3004_c21d_4c2f_8174_c821d26a2d2b.slice.
May 14 18:06:34.848140 containerd[1584]: time="2025-05-14T18:06:34.848095385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9csvq,Uid:09475220-a8fd-49bd-b027-4841fef9b46b,Namespace:kube-system,Attempt:0,}"
May 14 18:06:34.887756 containerd[1584]: time="2025-05-14T18:06:34.887708059Z" level=info msg="connecting to shim b08790ca43e47cf31559be5d4d18020a72c3c7d07cb1cae21558b5d5b44169d6" address="unix:///run/containerd/s/a3c606939e5954d1b18d3c95e542a097cbea579e1de6cc0b682bebab6da80e63" namespace=k8s.io protocol=ttrpc version=3
May 14 18:06:34.893471 kubelet[2683]: I0514 18:06:34.893431 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5fk\" (UniqueName: \"kubernetes.io/projected/c6cd3004-c21d-4c2f-8174-c821d26a2d2b-kube-api-access-6l5fk\") pod \"tigera-operator-6f6897fdc5-5hgw4\" (UID: \"c6cd3004-c21d-4c2f-8174-c821d26a2d2b\") " pod="tigera-operator/tigera-operator-6f6897fdc5-5hgw4"
May 14 18:06:34.893471 kubelet[2683]: I0514 18:06:34.893469 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6cd3004-c21d-4c2f-8174-c821d26a2d2b-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-5hgw4\" (UID: \"c6cd3004-c21d-4c2f-8174-c821d26a2d2b\") " pod="tigera-operator/tigera-operator-6f6897fdc5-5hgw4"
May 14 18:06:34.911968 systemd[1]: Started cri-containerd-b08790ca43e47cf31559be5d4d18020a72c3c7d07cb1cae21558b5d5b44169d6.scope - libcontainer container b08790ca43e47cf31559be5d4d18020a72c3c7d07cb1cae21558b5d5b44169d6.
May 14 18:06:34.937384 containerd[1584]: time="2025-05-14T18:06:34.937337326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9csvq,Uid:09475220-a8fd-49bd-b027-4841fef9b46b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b08790ca43e47cf31559be5d4d18020a72c3c7d07cb1cae21558b5d5b44169d6\""
May 14 18:06:34.939875 containerd[1584]: time="2025-05-14T18:06:34.939822698Z" level=info msg="CreateContainer within sandbox \"b08790ca43e47cf31559be5d4d18020a72c3c7d07cb1cae21558b5d5b44169d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 18:06:34.950862 containerd[1584]: time="2025-05-14T18:06:34.950600799Z" level=info msg="Container 3c215728fff434d8c68e62edf33a77d8a6b180ab7657928f76cf4972cbf1d40c: CDI devices from CRI Config.CDIDevices: []"
May 14 18:06:34.955589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696448441.mount: Deactivated successfully.
May 14 18:06:34.961895 containerd[1584]: time="2025-05-14T18:06:34.961828768Z" level=info msg="CreateContainer within sandbox \"b08790ca43e47cf31559be5d4d18020a72c3c7d07cb1cae21558b5d5b44169d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c215728fff434d8c68e62edf33a77d8a6b180ab7657928f76cf4972cbf1d40c\""
May 14 18:06:34.962538 containerd[1584]: time="2025-05-14T18:06:34.962501192Z" level=info msg="StartContainer for \"3c215728fff434d8c68e62edf33a77d8a6b180ab7657928f76cf4972cbf1d40c\""
May 14 18:06:34.964344 containerd[1584]: time="2025-05-14T18:06:34.964312007Z" level=info msg="connecting to shim 3c215728fff434d8c68e62edf33a77d8a6b180ab7657928f76cf4972cbf1d40c" address="unix:///run/containerd/s/a3c606939e5954d1b18d3c95e542a097cbea579e1de6cc0b682bebab6da80e63" protocol=ttrpc version=3
May 14 18:06:34.991999 systemd[1]: Started cri-containerd-3c215728fff434d8c68e62edf33a77d8a6b180ab7657928f76cf4972cbf1d40c.scope - libcontainer container 3c215728fff434d8c68e62edf33a77d8a6b180ab7657928f76cf4972cbf1d40c.
May 14 18:06:35.030185 containerd[1584]: time="2025-05-14T18:06:35.030143999Z" level=info msg="StartContainer for \"3c215728fff434d8c68e62edf33a77d8a6b180ab7657928f76cf4972cbf1d40c\" returns successfully" May 14 18:06:35.122458 containerd[1584]: time="2025-05-14T18:06:35.122332790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-5hgw4,Uid:c6cd3004-c21d-4c2f-8174-c821d26a2d2b,Namespace:tigera-operator,Attempt:0,}" May 14 18:06:35.140325 containerd[1584]: time="2025-05-14T18:06:35.140283563Z" level=info msg="connecting to shim 3dadd4c91f59c9e64c8eb6d1bf1fbd6b72ef4752d24d5e53ae8e634f0c9bd37a" address="unix:///run/containerd/s/b2bf8163ce0945c1aedfe0fe52b1ab683bbf581ef8d0f3b46c7dbc76792a4440" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:35.164996 systemd[1]: Started cri-containerd-3dadd4c91f59c9e64c8eb6d1bf1fbd6b72ef4752d24d5e53ae8e634f0c9bd37a.scope - libcontainer container 3dadd4c91f59c9e64c8eb6d1bf1fbd6b72ef4752d24d5e53ae8e634f0c9bd37a. May 14 18:06:35.206957 containerd[1584]: time="2025-05-14T18:06:35.206910560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-5hgw4,Uid:c6cd3004-c21d-4c2f-8174-c821d26a2d2b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3dadd4c91f59c9e64c8eb6d1bf1fbd6b72ef4752d24d5e53ae8e634f0c9bd37a\"" May 14 18:06:35.208885 containerd[1584]: time="2025-05-14T18:06:35.208864459Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 18:06:35.708139 kubelet[2683]: I0514 18:06:35.708088 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9csvq" podStartSLOduration=1.708073514 podStartE2EDuration="1.708073514s" podCreationTimestamp="2025-05-14 18:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:35.707863203 +0000 UTC m=+6.124899857" watchObservedRunningTime="2025-05-14 
18:06:35.708073514 +0000 UTC m=+6.125110168" May 14 18:06:36.929111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447213860.mount: Deactivated successfully. May 14 18:06:37.206561 containerd[1584]: time="2025-05-14T18:06:37.206441088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:37.207150 containerd[1584]: time="2025-05-14T18:06:37.207127145Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 18:06:37.208278 containerd[1584]: time="2025-05-14T18:06:37.208211481Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:37.210109 containerd[1584]: time="2025-05-14T18:06:37.210072202Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:37.210614 containerd[1584]: time="2025-05-14T18:06:37.210545066Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.001654716s" May 14 18:06:37.210614 containerd[1584]: time="2025-05-14T18:06:37.210608327Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 18:06:37.212366 containerd[1584]: time="2025-05-14T18:06:37.212329139Z" level=info msg="CreateContainer within sandbox \"3dadd4c91f59c9e64c8eb6d1bf1fbd6b72ef4752d24d5e53ae8e634f0c9bd37a\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 18:06:37.218416 containerd[1584]: time="2025-05-14T18:06:37.218381019Z" level=info msg="Container db190d7b099cca778debfd5dbe9f3bb6368b888100a4b1c15487e06d99717cdf: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:37.221782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016839686.mount: Deactivated successfully. May 14 18:06:37.225257 containerd[1584]: time="2025-05-14T18:06:37.225219385Z" level=info msg="CreateContainer within sandbox \"3dadd4c91f59c9e64c8eb6d1bf1fbd6b72ef4752d24d5e53ae8e634f0c9bd37a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"db190d7b099cca778debfd5dbe9f3bb6368b888100a4b1c15487e06d99717cdf\"" May 14 18:06:37.225591 containerd[1584]: time="2025-05-14T18:06:37.225568692Z" level=info msg="StartContainer for \"db190d7b099cca778debfd5dbe9f3bb6368b888100a4b1c15487e06d99717cdf\"" May 14 18:06:37.226321 containerd[1584]: time="2025-05-14T18:06:37.226286921Z" level=info msg="connecting to shim db190d7b099cca778debfd5dbe9f3bb6368b888100a4b1c15487e06d99717cdf" address="unix:///run/containerd/s/b2bf8163ce0945c1aedfe0fe52b1ab683bbf581ef8d0f3b46c7dbc76792a4440" protocol=ttrpc version=3 May 14 18:06:37.274977 systemd[1]: Started cri-containerd-db190d7b099cca778debfd5dbe9f3bb6368b888100a4b1c15487e06d99717cdf.scope - libcontainer container db190d7b099cca778debfd5dbe9f3bb6368b888100a4b1c15487e06d99717cdf. 
May 14 18:06:37.301555 containerd[1584]: time="2025-05-14T18:06:37.301520718Z" level=info msg="StartContainer for \"db190d7b099cca778debfd5dbe9f3bb6368b888100a4b1c15487e06d99717cdf\" returns successfully" May 14 18:06:40.147694 kubelet[2683]: I0514 18:06:40.147644 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-5hgw4" podStartSLOduration=4.144409642 podStartE2EDuration="6.147628368s" podCreationTimestamp="2025-05-14 18:06:34 +0000 UTC" firstStartedPulling="2025-05-14 18:06:35.207918838 +0000 UTC m=+5.624955492" lastFinishedPulling="2025-05-14 18:06:37.211137564 +0000 UTC m=+7.628174218" observedRunningTime="2025-05-14 18:06:37.762279201 +0000 UTC m=+8.179315855" watchObservedRunningTime="2025-05-14 18:06:40.147628368 +0000 UTC m=+10.564665023" May 14 18:06:40.162755 systemd[1]: Created slice kubepods-besteffort-podedf65c3b_b252_45e9_b86d_54069cd5bedf.slice - libcontainer container kubepods-besteffort-podedf65c3b_b252_45e9_b86d_54069cd5bedf.slice. May 14 18:06:40.169868 systemd[1]: Created slice kubepods-besteffort-podf2cda43a_504d_4acb_a376_f5e3a9d76bbf.slice - libcontainer container kubepods-besteffort-podf2cda43a_504d_4acb_a376_f5e3a9d76bbf.slice. 
May 14 18:06:40.227160 kubelet[2683]: I0514 18:06:40.227069 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-var-run-calico\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227160 kubelet[2683]: I0514 18:06:40.227120 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-flexvol-driver-host\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227160 kubelet[2683]: I0514 18:06:40.227147 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xbp\" (UniqueName: \"kubernetes.io/projected/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-kube-api-access-45xbp\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227160 kubelet[2683]: I0514 18:06:40.227168 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-policysync\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227414 kubelet[2683]: I0514 18:06:40.227187 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-tigera-ca-bundle\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227414 kubelet[2683]: I0514 
18:06:40.227225 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-cni-net-dir\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227414 kubelet[2683]: I0514 18:06:40.227247 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxwr\" (UniqueName: \"kubernetes.io/projected/edf65c3b-b252-45e9-b86d-54069cd5bedf-kube-api-access-sjxwr\") pod \"calico-typha-6c7bc698f5-2s6s8\" (UID: \"edf65c3b-b252-45e9-b86d-54069cd5bedf\") " pod="calico-system/calico-typha-6c7bc698f5-2s6s8" May 14 18:06:40.227414 kubelet[2683]: I0514 18:06:40.227271 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-lib-modules\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227414 kubelet[2683]: I0514 18:06:40.227289 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/edf65c3b-b252-45e9-b86d-54069cd5bedf-typha-certs\") pod \"calico-typha-6c7bc698f5-2s6s8\" (UID: \"edf65c3b-b252-45e9-b86d-54069cd5bedf\") " pod="calico-system/calico-typha-6c7bc698f5-2s6s8" May 14 18:06:40.227569 kubelet[2683]: I0514 18:06:40.227304 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-xtables-lock\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227569 kubelet[2683]: I0514 18:06:40.227320 2683 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-cni-bin-dir\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227569 kubelet[2683]: I0514 18:06:40.227337 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-cni-log-dir\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227569 kubelet[2683]: I0514 18:06:40.227357 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edf65c3b-b252-45e9-b86d-54069cd5bedf-tigera-ca-bundle\") pod \"calico-typha-6c7bc698f5-2s6s8\" (UID: \"edf65c3b-b252-45e9-b86d-54069cd5bedf\") " pod="calico-system/calico-typha-6c7bc698f5-2s6s8" May 14 18:06:40.227569 kubelet[2683]: I0514 18:06:40.227376 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-node-certs\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.227798 kubelet[2683]: I0514 18:06:40.227397 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2cda43a-504d-4acb-a376-f5e3a9d76bbf-var-lib-calico\") pod \"calico-node-4xpsz\" (UID: \"f2cda43a-504d-4acb-a376-f5e3a9d76bbf\") " pod="calico-system/calico-node-4xpsz" May 14 18:06:40.230457 kubelet[2683]: E0514 18:06:40.230259 2683 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9xcz" podUID="d88c5e05-4528-4641-b8cd-52f4b84e6088" May 14 18:06:40.328244 kubelet[2683]: I0514 18:06:40.328184 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d88c5e05-4528-4641-b8cd-52f4b84e6088-varrun\") pod \"csi-node-driver-l9xcz\" (UID: \"d88c5e05-4528-4641-b8cd-52f4b84e6088\") " pod="calico-system/csi-node-driver-l9xcz" May 14 18:06:40.328244 kubelet[2683]: I0514 18:06:40.328224 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d88c5e05-4528-4641-b8cd-52f4b84e6088-kubelet-dir\") pod \"csi-node-driver-l9xcz\" (UID: \"d88c5e05-4528-4641-b8cd-52f4b84e6088\") " pod="calico-system/csi-node-driver-l9xcz" May 14 18:06:40.328408 kubelet[2683]: I0514 18:06:40.328258 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rkd\" (UniqueName: \"kubernetes.io/projected/d88c5e05-4528-4641-b8cd-52f4b84e6088-kube-api-access-28rkd\") pod \"csi-node-driver-l9xcz\" (UID: \"d88c5e05-4528-4641-b8cd-52f4b84e6088\") " pod="calico-system/csi-node-driver-l9xcz" May 14 18:06:40.328408 kubelet[2683]: I0514 18:06:40.328290 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d88c5e05-4528-4641-b8cd-52f4b84e6088-registration-dir\") pod \"csi-node-driver-l9xcz\" (UID: \"d88c5e05-4528-4641-b8cd-52f4b84e6088\") " pod="calico-system/csi-node-driver-l9xcz" May 14 18:06:40.328408 kubelet[2683]: I0514 18:06:40.328338 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d88c5e05-4528-4641-b8cd-52f4b84e6088-socket-dir\") pod \"csi-node-driver-l9xcz\" (UID: \"d88c5e05-4528-4641-b8cd-52f4b84e6088\") " pod="calico-system/csi-node-driver-l9xcz" May 14 18:06:40.329824 kubelet[2683]: E0514 18:06:40.329697 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.329824 kubelet[2683]: W0514 18:06:40.329716 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.329824 kubelet[2683]: E0514 18:06:40.329737 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.331397 kubelet[2683]: E0514 18:06:40.331109 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.331488 kubelet[2683]: W0514 18:06:40.331471 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.331619 kubelet[2683]: E0514 18:06:40.331606 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.333868 kubelet[2683]: E0514 18:06:40.333057 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.333868 kubelet[2683]: W0514 18:06:40.333076 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.333868 kubelet[2683]: E0514 18:06:40.333241 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.333868 kubelet[2683]: E0514 18:06:40.333314 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.333868 kubelet[2683]: W0514 18:06:40.333320 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.333868 kubelet[2683]: E0514 18:06:40.333394 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.337822 kubelet[2683]: E0514 18:06:40.336018 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.337822 kubelet[2683]: W0514 18:06:40.336136 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.337822 kubelet[2683]: E0514 18:06:40.336940 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.337822 kubelet[2683]: W0514 18:06:40.336948 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.337822 kubelet[2683]: E0514 18:06:40.337490 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.337822 kubelet[2683]: W0514 18:06:40.337498 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.338161 kubelet[2683]: E0514 18:06:40.338091 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.338161 kubelet[2683]: E0514 18:06:40.338129 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.338161 kubelet[2683]: E0514 18:06:40.338150 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.338567 kubelet[2683]: E0514 18:06:40.338545 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.338567 kubelet[2683]: W0514 18:06:40.338561 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.339883 kubelet[2683]: E0514 18:06:40.338867 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.339883 kubelet[2683]: W0514 18:06:40.338887 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.339883 kubelet[2683]: E0514 18:06:40.339051 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.339883 kubelet[2683]: E0514 18:06:40.339109 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.339883 kubelet[2683]: W0514 18:06:40.339238 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.339883 kubelet[2683]: E0514 18:06:40.339250 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.339883 kubelet[2683]: E0514 18:06:40.339200 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.340387 kubelet[2683]: E0514 18:06:40.340354 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.340387 kubelet[2683]: W0514 18:06:40.340368 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.340387 kubelet[2683]: E0514 18:06:40.340378 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.341255 kubelet[2683]: E0514 18:06:40.341233 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.341255 kubelet[2683]: W0514 18:06:40.341248 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.341637 kubelet[2683]: E0514 18:06:40.341603 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.342381 kubelet[2683]: E0514 18:06:40.342252 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.342553 kubelet[2683]: W0514 18:06:40.342473 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.342553 kubelet[2683]: E0514 18:06:40.342487 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.342967 kubelet[2683]: E0514 18:06:40.342922 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.343011 kubelet[2683]: W0514 18:06:40.342968 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.343011 kubelet[2683]: E0514 18:06:40.342983 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.343212 kubelet[2683]: E0514 18:06:40.343180 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.343411 kubelet[2683]: W0514 18:06:40.343268 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.343411 kubelet[2683]: E0514 18:06:40.343295 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.344141 kubelet[2683]: E0514 18:06:40.344042 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.344141 kubelet[2683]: W0514 18:06:40.344061 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.345629 kubelet[2683]: E0514 18:06:40.345610 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.345771 kubelet[2683]: E0514 18:06:40.345718 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.345771 kubelet[2683]: W0514 18:06:40.345766 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.345874 kubelet[2683]: E0514 18:06:40.345777 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.429691 kubelet[2683]: E0514 18:06:40.429659 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.429691 kubelet[2683]: W0514 18:06:40.429677 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.429691 kubelet[2683]: E0514 18:06:40.429695 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.429951 kubelet[2683]: E0514 18:06:40.429910 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.429951 kubelet[2683]: W0514 18:06:40.429917 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.429951 kubelet[2683]: E0514 18:06:40.429925 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.430189 kubelet[2683]: E0514 18:06:40.430130 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.430189 kubelet[2683]: W0514 18:06:40.430147 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.430311 kubelet[2683]: E0514 18:06:40.430223 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.430417 kubelet[2683]: E0514 18:06:40.430400 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.430417 kubelet[2683]: W0514 18:06:40.430412 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.430478 kubelet[2683]: E0514 18:06:40.430427 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.430584 kubelet[2683]: E0514 18:06:40.430566 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.430584 kubelet[2683]: W0514 18:06:40.430576 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.430656 kubelet[2683]: E0514 18:06:40.430593 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.430755 kubelet[2683]: E0514 18:06:40.430739 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.430755 kubelet[2683]: W0514 18:06:40.430752 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.430892 kubelet[2683]: E0514 18:06:40.430765 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.430939 kubelet[2683]: E0514 18:06:40.430924 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.430939 kubelet[2683]: W0514 18:06:40.430931 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.431022 kubelet[2683]: E0514 18:06:40.430948 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:40.431115 kubelet[2683]: E0514 18:06:40.431099 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:40.431115 kubelet[2683]: W0514 18:06:40.431112 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:40.431164 kubelet[2683]: E0514 18:06:40.431125 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:40.477285 containerd[1584]: time="2025-05-14T18:06:40.477213176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c7bc698f5-2s6s8,Uid:edf65c3b-b252-45e9-b86d-54069cd5bedf,Namespace:calico-system,Attempt:0,}" May 14 18:06:40.477633 containerd[1584]: time="2025-05-14T18:06:40.477285747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4xpsz,Uid:f2cda43a-504d-4acb-a376-f5e3a9d76bbf,Namespace:calico-system,Attempt:0,}" May 14 18:06:40.512231 containerd[1584]: time="2025-05-14T18:06:40.512183069Z" level=info msg="connecting to shim 3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe" address="unix:///run/containerd/s/f4fdce3e2d36379d38a670f23e54376fe5aac994e242560464a869d6d58ef7b3" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:40.513862 containerd[1584]: time="2025-05-14T18:06:40.513517636Z" level=info msg="connecting to shim e86fecce085a73951c4f749e66a573bd2dd737c7e7112b2dc23a3ba4ae3482a1" address="unix:///run/containerd/s/77d8169f3902b9f22e8a78ea4897e3e69c51b681845d4599cee0e3ea26e33600" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:40.548052 systemd[1]: Started cri-containerd-3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe.scope - libcontainer container 3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe. May 14 18:06:40.549989 systemd[1]: Started cri-containerd-e86fecce085a73951c4f749e66a573bd2dd737c7e7112b2dc23a3ba4ae3482a1.scope - libcontainer container e86fecce085a73951c4f749e66a573bd2dd737c7e7112b2dc23a3ba4ae3482a1. 
May 14 18:06:40.658698 containerd[1584]: time="2025-05-14T18:06:40.658652401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4xpsz,Uid:f2cda43a-504d-4acb-a376-f5e3a9d76bbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe\"" May 14 18:06:40.660414 containerd[1584]: time="2025-05-14T18:06:40.660373213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 18:06:40.661926 containerd[1584]: time="2025-05-14T18:06:40.661899796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c7bc698f5-2s6s8,Uid:edf65c3b-b252-45e9-b86d-54069cd5bedf,Namespace:calico-system,Attempt:0,} returns sandbox id \"e86fecce085a73951c4f749e66a573bd2dd737c7e7112b2dc23a3ba4ae3482a1\"" May 14 18:06:42.125328 containerd[1584]: time="2025-05-14T18:06:42.125264767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:42.126108 containerd[1584]: time="2025-05-14T18:06:42.126077140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 18:06:42.127202 containerd[1584]: time="2025-05-14T18:06:42.127140485Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:42.129134 containerd[1584]: time="2025-05-14T18:06:42.129084902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:42.129859 containerd[1584]: time="2025-05-14T18:06:42.129740810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id 
\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.469324366s" May 14 18:06:42.129859 containerd[1584]: time="2025-05-14T18:06:42.129774829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 18:06:42.131281 containerd[1584]: time="2025-05-14T18:06:42.131258631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 18:06:42.131853 containerd[1584]: time="2025-05-14T18:06:42.131825008Z" level=info msg="CreateContainer within sandbox \"3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:06:42.140463 containerd[1584]: time="2025-05-14T18:06:42.140406908Z" level=info msg="Container 488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:42.148637 containerd[1584]: time="2025-05-14T18:06:42.148597503Z" level=info msg="CreateContainer within sandbox \"3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329\"" May 14 18:06:42.149056 containerd[1584]: time="2025-05-14T18:06:42.149018660Z" level=info msg="StartContainer for \"488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329\"" May 14 18:06:42.150482 containerd[1584]: time="2025-05-14T18:06:42.150447363Z" level=info msg="connecting to shim 488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329" address="unix:///run/containerd/s/f4fdce3e2d36379d38a670f23e54376fe5aac994e242560464a869d6d58ef7b3" 
protocol=ttrpc version=3 May 14 18:06:42.173009 systemd[1]: Started cri-containerd-488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329.scope - libcontainer container 488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329. May 14 18:06:42.224242 systemd[1]: cri-containerd-488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329.scope: Deactivated successfully. May 14 18:06:42.226853 containerd[1584]: time="2025-05-14T18:06:42.226813732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329\" id:\"488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329\" pid:3234 exited_at:{seconds:1747246002 nanos:226350443}" May 14 18:06:42.236957 containerd[1584]: time="2025-05-14T18:06:42.236914073Z" level=info msg="received exit event container_id:\"488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329\" id:\"488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329\" pid:3234 exited_at:{seconds:1747246002 nanos:226350443}" May 14 18:06:42.245310 containerd[1584]: time="2025-05-14T18:06:42.245267923Z" level=info msg="StartContainer for \"488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329\" returns successfully" May 14 18:06:42.257198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-488e3b459c4af154b9410f25fd2b85740ed98d35ee5f7e8d71073ef369338329-rootfs.mount: Deactivated successfully. 
May 14 18:06:42.671856 kubelet[2683]: E0514 18:06:42.671804 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9xcz" podUID="d88c5e05-4528-4641-b8cd-52f4b84e6088" May 14 18:06:44.085299 containerd[1584]: time="2025-05-14T18:06:44.085233708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:44.086009 containerd[1584]: time="2025-05-14T18:06:44.085962167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 18:06:44.087158 containerd[1584]: time="2025-05-14T18:06:44.087115473Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:44.088995 containerd[1584]: time="2025-05-14T18:06:44.088955836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:44.089517 containerd[1584]: time="2025-05-14T18:06:44.089448002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.958161151s" May 14 18:06:44.089517 containerd[1584]: time="2025-05-14T18:06:44.089506239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference 
\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 18:06:44.090572 containerd[1584]: time="2025-05-14T18:06:44.090538123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 18:06:44.098564 containerd[1584]: time="2025-05-14T18:06:44.098513985Z" level=info msg="CreateContainer within sandbox \"e86fecce085a73951c4f749e66a573bd2dd737c7e7112b2dc23a3ba4ae3482a1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 18:06:44.107779 containerd[1584]: time="2025-05-14T18:06:44.107728990Z" level=info msg="Container e37df168697e66f16bcb0dd9e13ab60e2ba4a14624ae2dcbed44acfcb510a100: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:44.117051 containerd[1584]: time="2025-05-14T18:06:44.116998688Z" level=info msg="CreateContainer within sandbox \"e86fecce085a73951c4f749e66a573bd2dd737c7e7112b2dc23a3ba4ae3482a1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e37df168697e66f16bcb0dd9e13ab60e2ba4a14624ae2dcbed44acfcb510a100\"" May 14 18:06:44.117597 containerd[1584]: time="2025-05-14T18:06:44.117563243Z" level=info msg="StartContainer for \"e37df168697e66f16bcb0dd9e13ab60e2ba4a14624ae2dcbed44acfcb510a100\"" May 14 18:06:44.119244 containerd[1584]: time="2025-05-14T18:06:44.119178560Z" level=info msg="connecting to shim e37df168697e66f16bcb0dd9e13ab60e2ba4a14624ae2dcbed44acfcb510a100" address="unix:///run/containerd/s/77d8169f3902b9f22e8a78ea4897e3e69c51b681845d4599cee0e3ea26e33600" protocol=ttrpc version=3 May 14 18:06:44.146030 systemd[1]: Started cri-containerd-e37df168697e66f16bcb0dd9e13ab60e2ba4a14624ae2dcbed44acfcb510a100.scope - libcontainer container e37df168697e66f16bcb0dd9e13ab60e2ba4a14624ae2dcbed44acfcb510a100. 
May 14 18:06:44.194320 containerd[1584]: time="2025-05-14T18:06:44.194274755Z" level=info msg="StartContainer for \"e37df168697e66f16bcb0dd9e13ab60e2ba4a14624ae2dcbed44acfcb510a100\" returns successfully" May 14 18:06:44.671468 kubelet[2683]: E0514 18:06:44.671402 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9xcz" podUID="d88c5e05-4528-4641-b8cd-52f4b84e6088" May 14 18:06:45.719739 kubelet[2683]: I0514 18:06:45.719707 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:06:46.672260 kubelet[2683]: E0514 18:06:46.672203 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9xcz" podUID="d88c5e05-4528-4641-b8cd-52f4b84e6088" May 14 18:06:47.781963 update_engine[1571]: I20250514 18:06:47.781884 1571 update_attempter.cc:509] Updating boot flags... 
May 14 18:06:48.672521 kubelet[2683]: E0514 18:06:48.672446 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9xcz" podUID="d88c5e05-4528-4641-b8cd-52f4b84e6088" May 14 18:06:50.125120 containerd[1584]: time="2025-05-14T18:06:50.125072548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:50.125971 containerd[1584]: time="2025-05-14T18:06:50.125935174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 18:06:50.129938 containerd[1584]: time="2025-05-14T18:06:50.129905738Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:50.132105 containerd[1584]: time="2025-05-14T18:06:50.132033221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:50.132760 containerd[1584]: time="2025-05-14T18:06:50.132725030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.042153194s" May 14 18:06:50.132804 containerd[1584]: time="2025-05-14T18:06:50.132759736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 18:06:50.134623 containerd[1584]: time="2025-05-14T18:06:50.134590929Z" level=info msg="CreateContainer within sandbox \"3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:06:50.143665 containerd[1584]: time="2025-05-14T18:06:50.143615350Z" level=info msg="Container 294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:50.153753 containerd[1584]: time="2025-05-14T18:06:50.153708260Z" level=info msg="CreateContainer within sandbox \"3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc\"" May 14 18:06:50.154195 containerd[1584]: time="2025-05-14T18:06:50.154166642Z" level=info msg="StartContainer for \"294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc\"" May 14 18:06:50.158167 containerd[1584]: time="2025-05-14T18:06:50.158112887Z" level=info msg="connecting to shim 294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc" address="unix:///run/containerd/s/f4fdce3e2d36379d38a670f23e54376fe5aac994e242560464a869d6d58ef7b3" protocol=ttrpc version=3 May 14 18:06:50.185097 systemd[1]: Started cri-containerd-294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc.scope - libcontainer container 294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc. 
May 14 18:06:50.242075 containerd[1584]: time="2025-05-14T18:06:50.241999792Z" level=info msg="StartContainer for \"294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc\" returns successfully" May 14 18:06:50.672347 kubelet[2683]: E0514 18:06:50.672273 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9xcz" podUID="d88c5e05-4528-4641-b8cd-52f4b84e6088" May 14 18:06:50.749099 kubelet[2683]: I0514 18:06:50.749014 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c7bc698f5-2s6s8" podStartSLOduration=7.321686329 podStartE2EDuration="10.748997702s" podCreationTimestamp="2025-05-14 18:06:40 +0000 UTC" firstStartedPulling="2025-05-14 18:06:40.663073711 +0000 UTC m=+11.080110365" lastFinishedPulling="2025-05-14 18:06:44.090385084 +0000 UTC m=+14.507421738" observedRunningTime="2025-05-14 18:06:44.72965241 +0000 UTC m=+15.146689154" watchObservedRunningTime="2025-05-14 18:06:50.748997702 +0000 UTC m=+21.166034356" May 14 18:06:51.406496 containerd[1584]: time="2025-05-14T18:06:51.406433326Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:06:51.409523 systemd[1]: cri-containerd-294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc.scope: Deactivated successfully. May 14 18:06:51.410027 systemd[1]: cri-containerd-294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc.scope: Consumed 512ms CPU time, 160.8M memory peak, 44K read from disk, 154M written to disk. 
May 14 18:06:51.410624 containerd[1584]: time="2025-05-14T18:06:51.410588296Z" level=info msg="received exit event container_id:\"294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc\" id:\"294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc\" pid:3354 exited_at:{seconds:1747246011 nanos:410302710}" May 14 18:06:51.410718 containerd[1584]: time="2025-05-14T18:06:51.410670951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc\" id:\"294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc\" pid:3354 exited_at:{seconds:1747246011 nanos:410302710}" May 14 18:06:51.433656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-294929dfe2c3992dec954329cfe0cd581ed593026c02c30012adbcd2adedebdc-rootfs.mount: Deactivated successfully. May 14 18:06:51.438783 kubelet[2683]: I0514 18:06:51.438754 2683 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:06:51.473639 systemd[1]: Created slice kubepods-besteffort-podc14e1e58_66d7_4ca0_8c8c_e9ce8b9d90c1.slice - libcontainer container kubepods-besteffort-podc14e1e58_66d7_4ca0_8c8c_e9ce8b9d90c1.slice. May 14 18:06:51.480084 systemd[1]: Created slice kubepods-besteffort-pod12e75302_7522_48c2_b49c_260937f5c2a2.slice - libcontainer container kubepods-besteffort-pod12e75302_7522_48c2_b49c_260937f5c2a2.slice. May 14 18:06:51.487205 systemd[1]: Created slice kubepods-burstable-podc94082e6_1a6c_415b_a039_c134ece01d17.slice - libcontainer container kubepods-burstable-podc94082e6_1a6c_415b_a039_c134ece01d17.slice. May 14 18:06:51.493275 systemd[1]: Created slice kubepods-burstable-pod9f3fee04_17e6_4d93_bbcd_55e6b6f25d29.slice - libcontainer container kubepods-burstable-pod9f3fee04_17e6_4d93_bbcd_55e6b6f25d29.slice. 
May 14 18:06:51.498112 systemd[1]: Created slice kubepods-besteffort-pod530563e4_43c9_4459_9d56_5d783bccbb99.slice - libcontainer container kubepods-besteffort-pod530563e4_43c9_4459_9d56_5d783bccbb99.slice. May 14 18:06:51.507122 kubelet[2683]: I0514 18:06:51.507086 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gccz\" (UniqueName: \"kubernetes.io/projected/12e75302-7522-48c2-b49c-260937f5c2a2-kube-api-access-2gccz\") pod \"calico-kube-controllers-57664f55bc-5xsfb\" (UID: \"12e75302-7522-48c2-b49c-260937f5c2a2\") " pod="calico-system/calico-kube-controllers-57664f55bc-5xsfb" May 14 18:06:51.507122 kubelet[2683]: I0514 18:06:51.507120 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/530563e4-43c9-4459-9d56-5d783bccbb99-calico-apiserver-certs\") pod \"calico-apiserver-77698d8d79-b9jcf\" (UID: \"530563e4-43c9-4459-9d56-5d783bccbb99\") " pod="calico-apiserver/calico-apiserver-77698d8d79-b9jcf" May 14 18:06:51.507285 kubelet[2683]: I0514 18:06:51.507139 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh469\" (UniqueName: \"kubernetes.io/projected/c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1-kube-api-access-dh469\") pod \"calico-apiserver-77698d8d79-wxx2d\" (UID: \"c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1\") " pod="calico-apiserver/calico-apiserver-77698d8d79-wxx2d" May 14 18:06:51.507285 kubelet[2683]: I0514 18:06:51.507156 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12e75302-7522-48c2-b49c-260937f5c2a2-tigera-ca-bundle\") pod \"calico-kube-controllers-57664f55bc-5xsfb\" (UID: \"12e75302-7522-48c2-b49c-260937f5c2a2\") " pod="calico-system/calico-kube-controllers-57664f55bc-5xsfb" May 14 18:06:51.507285 kubelet[2683]: 
I0514 18:06:51.507172 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwktw\" (UniqueName: \"kubernetes.io/projected/9f3fee04-17e6-4d93-bbcd-55e6b6f25d29-kube-api-access-cwktw\") pod \"coredns-6f6b679f8f-xh4fw\" (UID: \"9f3fee04-17e6-4d93-bbcd-55e6b6f25d29\") " pod="kube-system/coredns-6f6b679f8f-xh4fw" May 14 18:06:51.507285 kubelet[2683]: I0514 18:06:51.507188 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c94082e6-1a6c-415b-a039-c134ece01d17-config-volume\") pod \"coredns-6f6b679f8f-5bxtb\" (UID: \"c94082e6-1a6c-415b-a039-c134ece01d17\") " pod="kube-system/coredns-6f6b679f8f-5bxtb" May 14 18:06:51.507285 kubelet[2683]: I0514 18:06:51.507204 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x52w\" (UniqueName: \"kubernetes.io/projected/530563e4-43c9-4459-9d56-5d783bccbb99-kube-api-access-7x52w\") pod \"calico-apiserver-77698d8d79-b9jcf\" (UID: \"530563e4-43c9-4459-9d56-5d783bccbb99\") " pod="calico-apiserver/calico-apiserver-77698d8d79-b9jcf" May 14 18:06:51.507405 kubelet[2683]: I0514 18:06:51.507221 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1-calico-apiserver-certs\") pod \"calico-apiserver-77698d8d79-wxx2d\" (UID: \"c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1\") " pod="calico-apiserver/calico-apiserver-77698d8d79-wxx2d" May 14 18:06:51.507405 kubelet[2683]: I0514 18:06:51.507239 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3fee04-17e6-4d93-bbcd-55e6b6f25d29-config-volume\") pod \"coredns-6f6b679f8f-xh4fw\" (UID: \"9f3fee04-17e6-4d93-bbcd-55e6b6f25d29\") " 
pod="kube-system/coredns-6f6b679f8f-xh4fw" May 14 18:06:51.507405 kubelet[2683]: I0514 18:06:51.507335 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpd25\" (UniqueName: \"kubernetes.io/projected/c94082e6-1a6c-415b-a039-c134ece01d17-kube-api-access-bpd25\") pod \"coredns-6f6b679f8f-5bxtb\" (UID: \"c94082e6-1a6c-415b-a039-c134ece01d17\") " pod="kube-system/coredns-6f6b679f8f-5bxtb" May 14 18:06:51.786247 containerd[1584]: time="2025-05-14T18:06:51.786201554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-wxx2d,Uid:c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1,Namespace:calico-apiserver,Attempt:0,}" May 14 18:06:51.787959 containerd[1584]: time="2025-05-14T18:06:51.787919692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57664f55bc-5xsfb,Uid:12e75302-7522-48c2-b49c-260937f5c2a2,Namespace:calico-system,Attempt:0,}" May 14 18:06:51.791482 containerd[1584]: time="2025-05-14T18:06:51.791444602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5bxtb,Uid:c94082e6-1a6c-415b-a039-c134ece01d17,Namespace:kube-system,Attempt:0,}" May 14 18:06:51.796101 containerd[1584]: time="2025-05-14T18:06:51.796069549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xh4fw,Uid:9f3fee04-17e6-4d93-bbcd-55e6b6f25d29,Namespace:kube-system,Attempt:0,}" May 14 18:06:51.800770 containerd[1584]: time="2025-05-14T18:06:51.800718807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-b9jcf,Uid:530563e4-43c9-4459-9d56-5d783bccbb99,Namespace:calico-apiserver,Attempt:0,}" May 14 18:06:52.031885 containerd[1584]: time="2025-05-14T18:06:52.031665515Z" level=error msg="Failed to destroy network for sandbox \"8e124deaed16baaf7a493e0c09edbecb765b1d79a53edefc7439d38fdc946d6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.033859 containerd[1584]: time="2025-05-14T18:06:52.033060326Z" level=error msg="Failed to destroy network for sandbox \"bec91922395a55c049d0fd39e5d09774cccf451076bf99f16b9e0d7aff507b4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.034038 containerd[1584]: time="2025-05-14T18:06:52.033986914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57664f55bc-5xsfb,Uid:12e75302-7522-48c2-b49c-260937f5c2a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e124deaed16baaf7a493e0c09edbecb765b1d79a53edefc7439d38fdc946d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.034232 containerd[1584]: time="2025-05-14T18:06:52.034218025Z" level=error msg="Failed to destroy network for sandbox \"bd40c832ac3d4e832a158eb33a08b2f2a68207f6d662f3532155983d6c7f238c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.035079 kubelet[2683]: E0514 18:06:52.035025 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e124deaed16baaf7a493e0c09edbecb765b1d79a53edefc7439d38fdc946d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.035463 kubelet[2683]: E0514 18:06:52.035104 2683 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e124deaed16baaf7a493e0c09edbecb765b1d79a53edefc7439d38fdc946d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57664f55bc-5xsfb" May 14 18:06:52.035463 kubelet[2683]: E0514 18:06:52.035124 2683 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e124deaed16baaf7a493e0c09edbecb765b1d79a53edefc7439d38fdc946d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57664f55bc-5xsfb" May 14 18:06:52.035463 kubelet[2683]: E0514 18:06:52.035177 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57664f55bc-5xsfb_calico-system(12e75302-7522-48c2-b49c-260937f5c2a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57664f55bc-5xsfb_calico-system(12e75302-7522-48c2-b49c-260937f5c2a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e124deaed16baaf7a493e0c09edbecb765b1d79a53edefc7439d38fdc946d6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57664f55bc-5xsfb" podUID="12e75302-7522-48c2-b49c-260937f5c2a2" May 14 18:06:52.035577 containerd[1584]: time="2025-05-14T18:06:52.035402138Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-5bxtb,Uid:c94082e6-1a6c-415b-a039-c134ece01d17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec91922395a55c049d0fd39e5d09774cccf451076bf99f16b9e0d7aff507b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.035718 kubelet[2683]: E0514 18:06:52.035664 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec91922395a55c049d0fd39e5d09774cccf451076bf99f16b9e0d7aff507b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.035755 kubelet[2683]: E0514 18:06:52.035742 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec91922395a55c049d0fd39e5d09774cccf451076bf99f16b9e0d7aff507b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5bxtb" May 14 18:06:52.035778 kubelet[2683]: E0514 18:06:52.035759 2683 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec91922395a55c049d0fd39e5d09774cccf451076bf99f16b9e0d7aff507b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5bxtb" May 14 18:06:52.035864 kubelet[2683]: E0514 18:06:52.035821 2683 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5bxtb_kube-system(c94082e6-1a6c-415b-a039-c134ece01d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5bxtb_kube-system(c94082e6-1a6c-415b-a039-c134ece01d17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bec91922395a55c049d0fd39e5d09774cccf451076bf99f16b9e0d7aff507b4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5bxtb" podUID="c94082e6-1a6c-415b-a039-c134ece01d17" May 14 18:06:52.037018 containerd[1584]: time="2025-05-14T18:06:52.036782024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-wxx2d,Uid:c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd40c832ac3d4e832a158eb33a08b2f2a68207f6d662f3532155983d6c7f238c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.037179 kubelet[2683]: E0514 18:06:52.037083 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd40c832ac3d4e832a158eb33a08b2f2a68207f6d662f3532155983d6c7f238c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.037179 kubelet[2683]: E0514 18:06:52.037141 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd40c832ac3d4e832a158eb33a08b2f2a68207f6d662f3532155983d6c7f238c\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77698d8d79-wxx2d" May 14 18:06:52.037179 kubelet[2683]: E0514 18:06:52.037162 2683 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd40c832ac3d4e832a158eb33a08b2f2a68207f6d662f3532155983d6c7f238c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77698d8d79-wxx2d" May 14 18:06:52.037358 kubelet[2683]: E0514 18:06:52.037212 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77698d8d79-wxx2d_calico-apiserver(c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77698d8d79-wxx2d_calico-apiserver(c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd40c832ac3d4e832a158eb33a08b2f2a68207f6d662f3532155983d6c7f238c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77698d8d79-wxx2d" podUID="c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1" May 14 18:06:52.047931 containerd[1584]: time="2025-05-14T18:06:52.047892117Z" level=error msg="Failed to destroy network for sandbox \"578796c5b19917824ad19fd3a64aaedd546571e8f66fdcd9318fd415915f371e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.048516 
containerd[1584]: time="2025-05-14T18:06:52.048463408Z" level=error msg="Failed to destroy network for sandbox \"16b740a4a7efb5c03601e96376c7bb8e35efb67e796d309df61b5b7275e984a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.049156 containerd[1584]: time="2025-05-14T18:06:52.049115875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xh4fw,Uid:9f3fee04-17e6-4d93-bbcd-55e6b6f25d29,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"578796c5b19917824ad19fd3a64aaedd546571e8f66fdcd9318fd415915f371e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.049386 kubelet[2683]: E0514 18:06:52.049341 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578796c5b19917824ad19fd3a64aaedd546571e8f66fdcd9318fd415915f371e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.049490 kubelet[2683]: E0514 18:06:52.049443 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578796c5b19917824ad19fd3a64aaedd546571e8f66fdcd9318fd415915f371e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xh4fw" May 14 18:06:52.049535 kubelet[2683]: E0514 18:06:52.049497 2683 kuberuntime_manager.go:1168] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578796c5b19917824ad19fd3a64aaedd546571e8f66fdcd9318fd415915f371e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xh4fw" May 14 18:06:52.049563 kubelet[2683]: E0514 18:06:52.049548 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xh4fw_kube-system(9f3fee04-17e6-4d93-bbcd-55e6b6f25d29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xh4fw_kube-system(9f3fee04-17e6-4d93-bbcd-55e6b6f25d29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"578796c5b19917824ad19fd3a64aaedd546571e8f66fdcd9318fd415915f371e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xh4fw" podUID="9f3fee04-17e6-4d93-bbcd-55e6b6f25d29" May 14 18:06:52.050166 containerd[1584]: time="2025-05-14T18:06:52.050108171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-b9jcf,Uid:530563e4-43c9-4459-9d56-5d783bccbb99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b740a4a7efb5c03601e96376c7bb8e35efb67e796d309df61b5b7275e984a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.050372 kubelet[2683]: E0514 18:06:52.050335 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"16b740a4a7efb5c03601e96376c7bb8e35efb67e796d309df61b5b7275e984a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.050428 kubelet[2683]: E0514 18:06:52.050387 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b740a4a7efb5c03601e96376c7bb8e35efb67e796d309df61b5b7275e984a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77698d8d79-b9jcf" May 14 18:06:52.050428 kubelet[2683]: E0514 18:06:52.050406 2683 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b740a4a7efb5c03601e96376c7bb8e35efb67e796d309df61b5b7275e984a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77698d8d79-b9jcf" May 14 18:06:52.050473 kubelet[2683]: E0514 18:06:52.050444 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77698d8d79-b9jcf_calico-apiserver(530563e4-43c9-4459-9d56-5d783bccbb99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77698d8d79-b9jcf_calico-apiserver(530563e4-43c9-4459-9d56-5d783bccbb99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16b740a4a7efb5c03601e96376c7bb8e35efb67e796d309df61b5b7275e984a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-77698d8d79-b9jcf" podUID="530563e4-43c9-4459-9d56-5d783bccbb99" May 14 18:06:52.677191 systemd[1]: Created slice kubepods-besteffort-podd88c5e05_4528_4641_b8cd_52f4b84e6088.slice - libcontainer container kubepods-besteffort-podd88c5e05_4528_4641_b8cd_52f4b84e6088.slice. May 14 18:06:52.679497 containerd[1584]: time="2025-05-14T18:06:52.679463379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9xcz,Uid:d88c5e05-4528-4641-b8cd-52f4b84e6088,Namespace:calico-system,Attempt:0,}" May 14 18:06:52.734542 containerd[1584]: time="2025-05-14T18:06:52.734472328Z" level=error msg="Failed to destroy network for sandbox \"8ff479d58858203e5dd49d5b15b21a7bea5ee3a0ad58a1d802ed6f9b8926e116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.736128 containerd[1584]: time="2025-05-14T18:06:52.735993467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9xcz,Uid:d88c5e05-4528-4641-b8cd-52f4b84e6088,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff479d58858203e5dd49d5b15b21a7bea5ee3a0ad58a1d802ed6f9b8926e116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.736329 kubelet[2683]: E0514 18:06:52.736212 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff479d58858203e5dd49d5b15b21a7bea5ee3a0ad58a1d802ed6f9b8926e116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:52.736329 kubelet[2683]: E0514 
18:06:52.736264 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff479d58858203e5dd49d5b15b21a7bea5ee3a0ad58a1d802ed6f9b8926e116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9xcz" May 14 18:06:52.736329 kubelet[2683]: E0514 18:06:52.736282 2683 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff479d58858203e5dd49d5b15b21a7bea5ee3a0ad58a1d802ed6f9b8926e116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9xcz" May 14 18:06:52.736441 kubelet[2683]: E0514 18:06:52.736315 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9xcz_calico-system(d88c5e05-4528-4641-b8cd-52f4b84e6088)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9xcz_calico-system(d88c5e05-4528-4641-b8cd-52f4b84e6088)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ff479d58858203e5dd49d5b15b21a7bea5ee3a0ad58a1d802ed6f9b8926e116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9xcz" podUID="d88c5e05-4528-4641-b8cd-52f4b84e6088" May 14 18:06:52.737226 systemd[1]: run-netns-cni\x2d896676dc\x2d1e75\x2da0ac\x2dd38b\x2d17dd9f2e1ed3.mount: Deactivated successfully. 
May 14 18:06:52.742453 containerd[1584]: time="2025-05-14T18:06:52.742416760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 18:06:56.534503 kubelet[2683]: I0514 18:06:56.534457 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:06:56.537732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17567707.mount: Deactivated successfully. May 14 18:06:57.510940 containerd[1584]: time="2025-05-14T18:06:57.510803897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:57.511913 containerd[1584]: time="2025-05-14T18:06:57.511866866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 18:06:57.513506 containerd[1584]: time="2025-05-14T18:06:57.513450827Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:57.538218 containerd[1584]: time="2025-05-14T18:06:57.538145877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:57.538610 containerd[1584]: time="2025-05-14T18:06:57.538567438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 4.7961095s" May 14 18:06:57.538610 containerd[1584]: time="2025-05-14T18:06:57.538602308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference 
\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 18:06:57.547270 containerd[1584]: time="2025-05-14T18:06:57.547218015Z" level=info msg="CreateContainer within sandbox \"3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 18:06:57.559379 containerd[1584]: time="2025-05-14T18:06:57.559329335Z" level=info msg="Container de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:57.569941 containerd[1584]: time="2025-05-14T18:06:57.569910627Z" level=info msg="CreateContainer within sandbox \"3d62796402d88bca11134dc24f9dc80a7f9e84c4ec26679f5168ad458ad67bfe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098\"" May 14 18:06:57.570579 containerd[1584]: time="2025-05-14T18:06:57.570448238Z" level=info msg="StartContainer for \"de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098\"" May 14 18:06:57.572240 containerd[1584]: time="2025-05-14T18:06:57.572173069Z" level=info msg="connecting to shim de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098" address="unix:///run/containerd/s/f4fdce3e2d36379d38a670f23e54376fe5aac994e242560464a869d6d58ef7b3" protocol=ttrpc version=3 May 14 18:06:57.595002 systemd[1]: Started cri-containerd-de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098.scope - libcontainer container de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098. May 14 18:06:57.640029 containerd[1584]: time="2025-05-14T18:06:57.639991903Z" level=info msg="StartContainer for \"de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098\" returns successfully" May 14 18:06:57.705269 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 18:06:57.705385 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. May 14 18:06:57.839599 containerd[1584]: time="2025-05-14T18:06:57.839436175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098\" id:\"6a76778f71eec0d150daec847ac68228404f6af5d4832f7f8eb6a1feeacfa494\" pid:3685 exit_status:1 exited_at:{seconds:1747246017 nanos:839140219}" May 14 18:06:58.460150 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:38586.service - OpenSSH per-connection server daemon (10.0.0.1:38586). May 14 18:06:58.545291 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 38586 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA May 14 18:06:58.547109 sshd-session[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:58.552171 systemd-logind[1566]: New session 8 of user core. May 14 18:06:58.562006 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:06:58.687874 sshd[3723]: Connection closed by 10.0.0.1 port 38586 May 14 18:06:58.688167 sshd-session[3721]: pam_unix(sshd:session): session closed for user core May 14 18:06:58.693168 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:38586.service: Deactivated successfully. May 14 18:06:58.695241 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:06:58.696107 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. May 14 18:06:58.697473 systemd-logind[1566]: Removed session 8. 
May 14 18:06:58.819355 containerd[1584]: time="2025-05-14T18:06:58.819233143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098\" id:\"78a3e932b8c8442cfd855ae1cf6c302bb0485ebf989f63619a41a841107cec63\" pid:3750 exit_status:1 exited_at:{seconds:1747246018 nanos:818910577}" May 14 18:06:59.730307 systemd-networkd[1490]: vxlan.calico: Link UP May 14 18:06:59.730316 systemd-networkd[1490]: vxlan.calico: Gained carrier May 14 18:07:00.893038 systemd-networkd[1490]: vxlan.calico: Gained IPv6LL May 14 18:07:03.671835 containerd[1584]: time="2025-05-14T18:07:03.671787259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9xcz,Uid:d88c5e05-4528-4641-b8cd-52f4b84e6088,Namespace:calico-system,Attempt:0,}" May 14 18:07:03.699964 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:46614.service - OpenSSH per-connection server daemon (10.0.0.1:46614). May 14 18:07:03.751604 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 46614 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA May 14 18:07:03.753139 sshd-session[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:03.758119 systemd-logind[1566]: New session 9 of user core. May 14 18:07:03.766987 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 14 18:07:03.808306 systemd-networkd[1490]: cali1295d9cd0e8: Link UP May 14 18:07:03.809129 systemd-networkd[1490]: cali1295d9cd0e8: Gained carrier May 14 18:07:03.819013 kubelet[2683]: I0514 18:07:03.818665 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4xpsz" podStartSLOduration=6.939527707 podStartE2EDuration="23.818648933s" podCreationTimestamp="2025-05-14 18:06:40 +0000 UTC" firstStartedPulling="2025-05-14 18:06:40.660050847 +0000 UTC m=+11.077087501" lastFinishedPulling="2025-05-14 18:06:57.539172073 +0000 UTC m=+27.956208727" observedRunningTime="2025-05-14 18:06:57.801433808 +0000 UTC m=+28.218470482" watchObservedRunningTime="2025-05-14 18:07:03.818648933 +0000 UTC m=+34.235685588" May 14 18:07:03.825731 containerd[1584]: 2025-05-14 18:07:03.711 [INFO][3963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--l9xcz-eth0 csi-node-driver- calico-system d88c5e05-4528-4641-b8cd-52f4b84e6088 609 0 2025-05-14 18:06:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-l9xcz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1295d9cd0e8 [] []}} ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" Pod="csi-node-driver-l9xcz" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-" May 14 18:07:03.825731 containerd[1584]: 2025-05-14 18:07:03.712 [INFO][3963] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" Pod="csi-node-driver-l9xcz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-eth0" May 14 18:07:03.825731 containerd[1584]: 2025-05-14 18:07:03.772 [INFO][3980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" HandleID="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Workload="localhost-k8s-csi--node--driver--l9xcz-eth0" May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.781 [INFO][3980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" HandleID="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Workload="localhost-k8s-csi--node--driver--l9xcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000241670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-l9xcz", "timestamp":"2025-05-14 18:07:03.772377385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.781 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.781 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.781 [INFO][3980] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.783 [INFO][3980] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" host="localhost" May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.788 [INFO][3980] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.792 [INFO][3980] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.794 [INFO][3980] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.795 [INFO][3980] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:07:03.826118 containerd[1584]: 2025-05-14 18:07:03.795 [INFO][3980] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" host="localhost" May 14 18:07:03.826877 containerd[1584]: 2025-05-14 18:07:03.796 [INFO][3980] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559 May 14 18:07:03.826877 containerd[1584]: 2025-05-14 18:07:03.799 [INFO][3980] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" host="localhost" May 14 18:07:03.826877 containerd[1584]: 2025-05-14 18:07:03.803 [INFO][3980] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" host="localhost" May 14 18:07:03.826877 containerd[1584]: 2025-05-14 18:07:03.803 [INFO][3980] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" host="localhost" May 14 18:07:03.826877 containerd[1584]: 2025-05-14 18:07:03.803 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:07:03.826877 containerd[1584]: 2025-05-14 18:07:03.803 [INFO][3980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" HandleID="k8s-pod-network.9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Workload="localhost-k8s-csi--node--driver--l9xcz-eth0" May 14 18:07:03.827015 containerd[1584]: 2025-05-14 18:07:03.805 [INFO][3963] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" Pod="csi-node-driver-l9xcz" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9xcz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d88c5e05-4528-4641-b8cd-52f4b84e6088", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-l9xcz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1295d9cd0e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:03.827015 containerd[1584]: 2025-05-14 18:07:03.805 [INFO][3963] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" Pod="csi-node-driver-l9xcz" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-eth0" May 14 18:07:03.827100 containerd[1584]: 2025-05-14 18:07:03.805 [INFO][3963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1295d9cd0e8 ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" Pod="csi-node-driver-l9xcz" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-eth0" May 14 18:07:03.827100 containerd[1584]: 2025-05-14 18:07:03.808 [INFO][3963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" Pod="csi-node-driver-l9xcz" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-eth0" May 14 18:07:03.827152 containerd[1584]: 2025-05-14 18:07:03.809 [INFO][3963] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" 
Pod="csi-node-driver-l9xcz" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9xcz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d88c5e05-4528-4641-b8cd-52f4b84e6088", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559", Pod:"csi-node-driver-l9xcz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1295d9cd0e8", MAC:"be:c4:58:fe:4c:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:03.827220 containerd[1584]: 2025-05-14 18:07:03.820 [INFO][3963] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" Namespace="calico-system" Pod="csi-node-driver-l9xcz" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9xcz-eth0" May 14 18:07:03.997029 sshd[3989]: Connection closed 
by 10.0.0.1 port 46614 May 14 18:07:03.997375 sshd-session[3977]: pam_unix(sshd:session): session closed for user core May 14 18:07:04.001880 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:46614.service: Deactivated successfully. May 14 18:07:04.003909 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:07:04.004751 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. May 14 18:07:04.006078 systemd-logind[1566]: Removed session 9. May 14 18:07:04.453415 containerd[1584]: time="2025-05-14T18:07:04.453367084Z" level=info msg="connecting to shim 9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559" address="unix:///run/containerd/s/4a460d5cd36a9332890e8fb75f4565f67a3ba3b8e66097f9145f332a5d506168" namespace=k8s.io protocol=ttrpc version=3 May 14 18:07:04.486072 systemd[1]: Started cri-containerd-9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559.scope - libcontainer container 9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559. 
May 14 18:07:04.497592 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:07:04.532917 containerd[1584]: time="2025-05-14T18:07:04.532869867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9xcz,Uid:d88c5e05-4528-4641-b8cd-52f4b84e6088,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559\"" May 14 18:07:04.534144 containerd[1584]: time="2025-05-14T18:07:04.534102221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 18:07:04.672264 containerd[1584]: time="2025-05-14T18:07:04.672219297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xh4fw,Uid:9f3fee04-17e6-4d93-bbcd-55e6b6f25d29,Namespace:kube-system,Attempt:0,}" May 14 18:07:04.672833 containerd[1584]: time="2025-05-14T18:07:04.672239583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5bxtb,Uid:c94082e6-1a6c-415b-a039-c134ece01d17,Namespace:kube-system,Attempt:0,}" May 14 18:07:04.672992 containerd[1584]: time="2025-05-14T18:07:04.672299660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-wxx2d,Uid:c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1,Namespace:calico-apiserver,Attempt:0,}" May 14 18:07:04.790491 systemd-networkd[1490]: cali50f314c037c: Link UP May 14 18:07:04.791878 systemd-networkd[1490]: cali50f314c037c: Gained carrier May 14 18:07:04.805230 containerd[1584]: 2025-05-14 18:07:04.720 [INFO][4085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0 calico-apiserver-77698d8d79- calico-apiserver c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1 692 0 2025-05-14 18:06:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77698d8d79 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77698d8d79-wxx2d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali50f314c037c [] []}} ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-" May 14 18:07:04.805230 containerd[1584]: 2025-05-14 18:07:04.720 [INFO][4085] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" May 14 18:07:04.805230 containerd[1584]: 2025-05-14 18:07:04.750 [INFO][4108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" HandleID="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Workload="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.760 [INFO][4108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" HandleID="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Workload="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003755e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77698d8d79-wxx2d", "timestamp":"2025-05-14 18:07:04.750892582 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.760 [INFO][4108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.761 [INFO][4108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.761 [INFO][4108] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.763 [INFO][4108] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" host="localhost" May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.768 [INFO][4108] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.771 [INFO][4108] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.773 [INFO][4108] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.775 [INFO][4108] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:07:04.805477 containerd[1584]: 2025-05-14 18:07:04.775 [INFO][4108] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" host="localhost" May 14 18:07:04.805702 containerd[1584]: 2025-05-14 18:07:04.776 [INFO][4108] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6 May 14 18:07:04.805702 containerd[1584]: 2025-05-14 18:07:04.779 [INFO][4108] 
ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" host="localhost" May 14 18:07:04.805702 containerd[1584]: 2025-05-14 18:07:04.784 [INFO][4108] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" host="localhost" May 14 18:07:04.805702 containerd[1584]: 2025-05-14 18:07:04.784 [INFO][4108] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" host="localhost" May 14 18:07:04.805702 containerd[1584]: 2025-05-14 18:07:04.784 [INFO][4108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:07:04.805702 containerd[1584]: 2025-05-14 18:07:04.784 [INFO][4108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" HandleID="k8s-pod-network.a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Workload="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" May 14 18:07:04.805818 containerd[1584]: 2025-05-14 18:07:04.788 [INFO][4085] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0", GenerateName:"calico-apiserver-77698d8d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1", ResourceVersion:"692", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77698d8d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77698d8d79-wxx2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50f314c037c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:04.805893 containerd[1584]: 2025-05-14 18:07:04.788 [INFO][4085] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" May 14 18:07:04.805893 containerd[1584]: 2025-05-14 18:07:04.788 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50f314c037c ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" May 14 18:07:04.805893 containerd[1584]: 2025-05-14 18:07:04.791 [INFO][4085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" May 14 18:07:04.805978 containerd[1584]: 2025-05-14 18:07:04.791 [INFO][4085] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0", GenerateName:"calico-apiserver-77698d8d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77698d8d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6", Pod:"calico-apiserver-77698d8d79-wxx2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50f314c037c", MAC:"d6:71:1b:f1:4d:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:04.806043 containerd[1584]: 2025-05-14 18:07:04.802 [INFO][4085] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-wxx2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--wxx2d-eth0" May 14 18:07:04.833102 containerd[1584]: time="2025-05-14T18:07:04.833039820Z" level=info msg="connecting to shim a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6" address="unix:///run/containerd/s/9b800791adcb2eca2401bc3c47ebc1690182791e25e96c96370e2a0ee4e8b4ad" namespace=k8s.io protocol=ttrpc version=3 May 14 18:07:04.861026 systemd-networkd[1490]: cali1295d9cd0e8: Gained IPv6LL May 14 18:07:04.864148 systemd[1]: Started cri-containerd-a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6.scope - libcontainer container a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6. 
May 14 18:07:04.881056 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:07:04.900990 systemd-networkd[1490]: calie2187b373f8: Link UP May 14 18:07:04.903071 systemd-networkd[1490]: calie2187b373f8: Gained carrier May 14 18:07:04.916024 containerd[1584]: 2025-05-14 18:07:04.725 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0 coredns-6f6b679f8f- kube-system 9f3fee04-17e6-4d93-bbcd-55e6b6f25d29 700 0 2025-05-14 18:06:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-xh4fw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2187b373f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-" May 14 18:07:04.916024 containerd[1584]: 2025-05-14 18:07:04.725 [INFO][4064] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" May 14 18:07:04.916024 containerd[1584]: 2025-05-14 18:07:04.757 [INFO][4121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" HandleID="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Workload="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.764 [INFO][4121] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" HandleID="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Workload="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003629d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-xh4fw", "timestamp":"2025-05-14 18:07:04.757438196 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.765 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.785 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.785 [INFO][4121] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.865 [INFO][4121] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" host="localhost" May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.870 [INFO][4121] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.874 [INFO][4121] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.875 [INFO][4121] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.877 [INFO][4121] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 14 18:07:04.916259 containerd[1584]: 2025-05-14 18:07:04.878 [INFO][4121] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" host="localhost" May 14 18:07:04.916480 containerd[1584]: 2025-05-14 18:07:04.880 [INFO][4121] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4 May 14 18:07:04.916480 containerd[1584]: 2025-05-14 18:07:04.884 [INFO][4121] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" host="localhost" May 14 18:07:04.916480 containerd[1584]: 2025-05-14 18:07:04.892 [INFO][4121] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" host="localhost" May 14 18:07:04.916480 containerd[1584]: 2025-05-14 18:07:04.892 [INFO][4121] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" host="localhost" May 14 18:07:04.916480 containerd[1584]: 2025-05-14 18:07:04.892 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:07:04.916480 containerd[1584]: 2025-05-14 18:07:04.892 [INFO][4121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" HandleID="k8s-pod-network.127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Workload="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" May 14 18:07:04.916594 containerd[1584]: 2025-05-14 18:07:04.896 [INFO][4064] cni-plugin/k8s.go 386: Populated endpoint ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9f3fee04-17e6-4d93-bbcd-55e6b6f25d29", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-xh4fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2187b373f8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:04.916648 containerd[1584]: 2025-05-14 18:07:04.896 [INFO][4064] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" May 14 18:07:04.916648 containerd[1584]: 2025-05-14 18:07:04.896 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2187b373f8 ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" May 14 18:07:04.916648 containerd[1584]: 2025-05-14 18:07:04.903 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" May 14 18:07:04.916728 containerd[1584]: 2025-05-14 18:07:04.904 [INFO][4064] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9f3fee04-17e6-4d93-bbcd-55e6b6f25d29", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4", Pod:"coredns-6f6b679f8f-xh4fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2187b373f8", MAC:"36:51:2a:10:8f:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:04.916728 containerd[1584]: 2025-05-14 18:07:04.913 [INFO][4064] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-xh4fw" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xh4fw-eth0" May 14 18:07:04.925600 containerd[1584]: time="2025-05-14T18:07:04.925553359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-wxx2d,Uid:c14e1e58-66d7-4ca0-8c8c-e9ce8b9d90c1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6\"" May 14 18:07:04.948132 containerd[1584]: time="2025-05-14T18:07:04.948085420Z" level=info msg="connecting to shim 127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4" address="unix:///run/containerd/s/630d02b5a4cbc03a81874c17142ffe76f328179574e0a509239d6e016d7f20ea" namespace=k8s.io protocol=ttrpc version=3 May 14 18:07:04.979194 systemd[1]: Started cri-containerd-127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4.scope - libcontainer container 127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4. May 14 18:07:04.992904 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:07:04.998617 systemd-networkd[1490]: cali672c5f6232e: Link UP May 14 18:07:04.999327 systemd-networkd[1490]: cali672c5f6232e: Gained carrier May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.721 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0 coredns-6f6b679f8f- kube-system c94082e6-1a6c-415b-a039-c134ece01d17 696 0 2025-05-14 18:06:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-5bxtb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali672c5f6232e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.721 [INFO][4078] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.759 [INFO][4110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" HandleID="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Workload="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.765 [INFO][4110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" HandleID="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Workload="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005238f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-5bxtb", "timestamp":"2025-05-14 18:07:04.759250296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.766 [INFO][4110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.892 [INFO][4110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.892 [INFO][4110] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.965 [INFO][4110] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.971 [INFO][4110] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.975 [INFO][4110] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.977 [INFO][4110] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.980 [INFO][4110] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.980 [INFO][4110] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.981 [INFO][4110] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32 May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.985 [INFO][4110] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.991 [INFO][4110] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.991 [INFO][4110] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" host="localhost" May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.991 [INFO][4110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:07:05.014158 containerd[1584]: 2025-05-14 18:07:04.991 [INFO][4110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" HandleID="k8s-pod-network.a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Workload="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" May 14 18:07:05.014908 containerd[1584]: 2025-05-14 18:07:04.995 [INFO][4078] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c94082e6-1a6c-415b-a039-c134ece01d17", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-5bxtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali672c5f6232e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:05.014908 containerd[1584]: 2025-05-14 18:07:04.996 [INFO][4078] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" May 14 18:07:05.014908 containerd[1584]: 2025-05-14 18:07:04.996 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali672c5f6232e ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" May 14 18:07:05.014908 containerd[1584]: 2025-05-14 18:07:04.999 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" May 14 
18:07:05.014908 containerd[1584]: 2025-05-14 18:07:04.999 [INFO][4078] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c94082e6-1a6c-415b-a039-c134ece01d17", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32", Pod:"coredns-6f6b679f8f-5bxtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali672c5f6232e", MAC:"76:0c:bd:18:82:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:05.014908 containerd[1584]: 2025-05-14 18:07:05.010 [INFO][4078] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" Namespace="kube-system" Pod="coredns-6f6b679f8f-5bxtb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5bxtb-eth0" May 14 18:07:05.033347 containerd[1584]: time="2025-05-14T18:07:05.033309189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xh4fw,Uid:9f3fee04-17e6-4d93-bbcd-55e6b6f25d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4\"" May 14 18:07:05.035714 containerd[1584]: time="2025-05-14T18:07:05.035678021Z" level=info msg="CreateContainer within sandbox \"127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:07:05.046013 containerd[1584]: time="2025-05-14T18:07:05.045356237Z" level=info msg="Container 42a99ef2e8406290313c6ad82a0c71a4670ce14946732f9ed01112f219cb2b27: CDI devices from CRI Config.CDIDevices: []" May 14 18:07:05.047967 containerd[1584]: time="2025-05-14T18:07:05.047946372Z" level=info msg="connecting to shim a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32" address="unix:///run/containerd/s/cfe20d3a0c9e1803fd216f1e69a915d4bc4435f79c78a18a3dac3b9223fc3c64" namespace=k8s.io protocol=ttrpc version=3 May 14 18:07:05.055047 containerd[1584]: time="2025-05-14T18:07:05.055000994Z" level=info msg="CreateContainer within sandbox \"127c0c8be4027221f1ee69e03747ba97543e028c797374f2941d72ae2b8fa1b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42a99ef2e8406290313c6ad82a0c71a4670ce14946732f9ed01112f219cb2b27\"" May 14 18:07:05.056154 
containerd[1584]: time="2025-05-14T18:07:05.056117048Z" level=info msg="StartContainer for \"42a99ef2e8406290313c6ad82a0c71a4670ce14946732f9ed01112f219cb2b27\"" May 14 18:07:05.057162 containerd[1584]: time="2025-05-14T18:07:05.057134717Z" level=info msg="connecting to shim 42a99ef2e8406290313c6ad82a0c71a4670ce14946732f9ed01112f219cb2b27" address="unix:///run/containerd/s/630d02b5a4cbc03a81874c17142ffe76f328179574e0a509239d6e016d7f20ea" protocol=ttrpc version=3 May 14 18:07:05.079050 systemd[1]: Started cri-containerd-a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32.scope - libcontainer container a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32. May 14 18:07:05.082275 systemd[1]: Started cri-containerd-42a99ef2e8406290313c6ad82a0c71a4670ce14946732f9ed01112f219cb2b27.scope - libcontainer container 42a99ef2e8406290313c6ad82a0c71a4670ce14946732f9ed01112f219cb2b27. May 14 18:07:05.093033 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:07:05.122057 containerd[1584]: time="2025-05-14T18:07:05.121916378Z" level=info msg="StartContainer for \"42a99ef2e8406290313c6ad82a0c71a4670ce14946732f9ed01112f219cb2b27\" returns successfully" May 14 18:07:05.133288 containerd[1584]: time="2025-05-14T18:07:05.133233107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5bxtb,Uid:c94082e6-1a6c-415b-a039-c134ece01d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32\"" May 14 18:07:05.150455 containerd[1584]: time="2025-05-14T18:07:05.150405287Z" level=info msg="CreateContainer within sandbox \"a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:07:05.162697 containerd[1584]: time="2025-05-14T18:07:05.162640969Z" level=info msg="Container 
b5646d82305c7ccd603536653854cc0c46eb5251e62c109020377903321f5b61: CDI devices from CRI Config.CDIDevices: []" May 14 18:07:05.171462 containerd[1584]: time="2025-05-14T18:07:05.171416021Z" level=info msg="CreateContainer within sandbox \"a68e3cfa17085ad274eb6b821ea19e8bd0de950a455c0b740bac765b90708d32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5646d82305c7ccd603536653854cc0c46eb5251e62c109020377903321f5b61\"" May 14 18:07:05.172065 containerd[1584]: time="2025-05-14T18:07:05.171993707Z" level=info msg="StartContainer for \"b5646d82305c7ccd603536653854cc0c46eb5251e62c109020377903321f5b61\"" May 14 18:07:05.173989 containerd[1584]: time="2025-05-14T18:07:05.173943083Z" level=info msg="connecting to shim b5646d82305c7ccd603536653854cc0c46eb5251e62c109020377903321f5b61" address="unix:///run/containerd/s/cfe20d3a0c9e1803fd216f1e69a915d4bc4435f79c78a18a3dac3b9223fc3c64" protocol=ttrpc version=3 May 14 18:07:05.207162 systemd[1]: Started cri-containerd-b5646d82305c7ccd603536653854cc0c46eb5251e62c109020377903321f5b61.scope - libcontainer container b5646d82305c7ccd603536653854cc0c46eb5251e62c109020377903321f5b61. 
May 14 18:07:05.380561 containerd[1584]: time="2025-05-14T18:07:05.380433248Z" level=info msg="StartContainer for \"b5646d82305c7ccd603536653854cc0c46eb5251e62c109020377903321f5b61\" returns successfully" May 14 18:07:05.815131 kubelet[2683]: I0514 18:07:05.812697 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xh4fw" podStartSLOduration=31.812679208 podStartE2EDuration="31.812679208s" podCreationTimestamp="2025-05-14 18:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:07:05.810611962 +0000 UTC m=+36.227648616" watchObservedRunningTime="2025-05-14 18:07:05.812679208 +0000 UTC m=+36.229715862" May 14 18:07:05.815131 kubelet[2683]: I0514 18:07:05.813950 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5bxtb" podStartSLOduration=31.813941873 podStartE2EDuration="31.813941873s" podCreationTimestamp="2025-05-14 18:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:07:05.800315337 +0000 UTC m=+36.217352001" watchObservedRunningTime="2025-05-14 18:07:05.813941873 +0000 UTC m=+36.230978527" May 14 18:07:05.948507 containerd[1584]: time="2025-05-14T18:07:05.948447561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:05.949235 containerd[1584]: time="2025-05-14T18:07:05.949157684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 18:07:05.950216 containerd[1584]: time="2025-05-14T18:07:05.950183086Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 
18:07:05.952097 containerd[1584]: time="2025-05-14T18:07:05.952061255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:05.952824 containerd[1584]: time="2025-05-14T18:07:05.952782848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.41861445s" May 14 18:07:05.952824 containerd[1584]: time="2025-05-14T18:07:05.952815466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 18:07:05.953738 containerd[1584]: time="2025-05-14T18:07:05.953701021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:07:05.955076 containerd[1584]: time="2025-05-14T18:07:05.955041304Z" level=info msg="CreateContainer within sandbox \"9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 18:07:05.992424 containerd[1584]: time="2025-05-14T18:07:05.992374365Z" level=info msg="Container ad7d3b77a33681eea65f625e64e20d00313412901d32bfd739dade9250a9076d: CDI devices from CRI Config.CDIDevices: []" May 14 18:07:06.026310 containerd[1584]: time="2025-05-14T18:07:06.026258561Z" level=info msg="CreateContainer within sandbox \"9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ad7d3b77a33681eea65f625e64e20d00313412901d32bfd739dade9250a9076d\"" May 14 18:07:06.026911 containerd[1584]: time="2025-05-14T18:07:06.026859473Z" 
level=info msg="StartContainer for \"ad7d3b77a33681eea65f625e64e20d00313412901d32bfd739dade9250a9076d\"" May 14 18:07:06.028534 containerd[1584]: time="2025-05-14T18:07:06.028486395Z" level=info msg="connecting to shim ad7d3b77a33681eea65f625e64e20d00313412901d32bfd739dade9250a9076d" address="unix:///run/containerd/s/4a460d5cd36a9332890e8fb75f4565f67a3ba3b8e66097f9145f332a5d506168" protocol=ttrpc version=3 May 14 18:07:06.055002 systemd[1]: Started cri-containerd-ad7d3b77a33681eea65f625e64e20d00313412901d32bfd739dade9250a9076d.scope - libcontainer container ad7d3b77a33681eea65f625e64e20d00313412901d32bfd739dade9250a9076d. May 14 18:07:06.101215 containerd[1584]: time="2025-05-14T18:07:06.101108780Z" level=info msg="StartContainer for \"ad7d3b77a33681eea65f625e64e20d00313412901d32bfd739dade9250a9076d\" returns successfully" May 14 18:07:06.589045 systemd-networkd[1490]: cali50f314c037c: Gained IPv6LL May 14 18:07:06.653013 systemd-networkd[1490]: calie2187b373f8: Gained IPv6LL May 14 18:07:06.672175 containerd[1584]: time="2025-05-14T18:07:06.672139268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-b9jcf,Uid:530563e4-43c9-4459-9d56-5d783bccbb99,Namespace:calico-apiserver,Attempt:0,}" May 14 18:07:06.755235 systemd-networkd[1490]: caliaacf3b59fc3: Link UP May 14 18:07:06.755612 systemd-networkd[1490]: caliaacf3b59fc3: Gained carrier May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.700 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0 calico-apiserver-77698d8d79- calico-apiserver 530563e4-43c9-4459-9d56-5d783bccbb99 697 0 2025-05-14 18:06:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77698d8d79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] 
map[] [] [] []} {k8s localhost calico-apiserver-77698d8d79-b9jcf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaacf3b59fc3 [] []}} ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.700 [INFO][4437] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.723 [INFO][4451] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" HandleID="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Workload="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.729 [INFO][4451] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" HandleID="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Workload="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001330f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77698d8d79-b9jcf", "timestamp":"2025-05-14 18:07:06.723018009 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 
18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.729 [INFO][4451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.729 [INFO][4451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.729 [INFO][4451] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.731 [INFO][4451] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.735 [INFO][4451] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.738 [INFO][4451] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.739 [INFO][4451] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.741 [INFO][4451] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.741 [INFO][4451] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.742 [INFO][4451] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998 May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.746 [INFO][4451] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.750 [INFO][4451] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.750 [INFO][4451] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" host="localhost" May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.751 [INFO][4451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:07:06.767882 containerd[1584]: 2025-05-14 18:07:06.751 [INFO][4451] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" HandleID="k8s-pod-network.158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Workload="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" May 14 18:07:06.768987 containerd[1584]: 2025-05-14 18:07:06.753 [INFO][4437] cni-plugin/k8s.go 386: Populated endpoint ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0", GenerateName:"calico-apiserver-77698d8d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"530563e4-43c9-4459-9d56-5d783bccbb99", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77698d8d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77698d8d79-b9jcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaacf3b59fc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:06.768987 containerd[1584]: 2025-05-14 18:07:06.753 [INFO][4437] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" May 14 18:07:06.768987 containerd[1584]: 2025-05-14 18:07:06.753 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaacf3b59fc3 ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" May 14 18:07:06.768987 containerd[1584]: 2025-05-14 18:07:06.755 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" 
Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" May 14 18:07:06.768987 containerd[1584]: 2025-05-14 18:07:06.755 [INFO][4437] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0", GenerateName:"calico-apiserver-77698d8d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"530563e4-43c9-4459-9d56-5d783bccbb99", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77698d8d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998", Pod:"calico-apiserver-77698d8d79-b9jcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaacf3b59fc3", MAC:"9a:0d:b9:bf:c9:e1", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:06.768987 containerd[1584]: 2025-05-14 18:07:06.762 [INFO][4437] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" Namespace="calico-apiserver" Pod="calico-apiserver-77698d8d79-b9jcf" WorkloadEndpoint="localhost-k8s-calico--apiserver--77698d8d79--b9jcf-eth0" May 14 18:07:06.793279 containerd[1584]: time="2025-05-14T18:07:06.793172920Z" level=info msg="connecting to shim 158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998" address="unix:///run/containerd/s/7218a3f4bc7b426e8209a068ac9d090399f60de6d97b170dc49061d0f7d8f9a0" namespace=k8s.io protocol=ttrpc version=3 May 14 18:07:06.819968 systemd[1]: Started cri-containerd-158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998.scope - libcontainer container 158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998. May 14 18:07:06.831622 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:07:06.860358 containerd[1584]: time="2025-05-14T18:07:06.860250209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77698d8d79-b9jcf,Uid:530563e4-43c9-4459-9d56-5d783bccbb99,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998\"" May 14 18:07:07.038044 systemd-networkd[1490]: cali672c5f6232e: Gained IPv6LL May 14 18:07:07.672156 containerd[1584]: time="2025-05-14T18:07:07.672100360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57664f55bc-5xsfb,Uid:12e75302-7522-48c2-b49c-260937f5c2a2,Namespace:calico-system,Attempt:0,}" May 14 18:07:08.375804 systemd-networkd[1490]: calibeacf36f655: Link UP May 14 18:07:08.376042 systemd-networkd[1490]: calibeacf36f655: Gained carrier May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.104 [INFO][4522] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0 calico-kube-controllers-57664f55bc- calico-system 12e75302-7522-48c2-b49c-260937f5c2a2 699 0 2025-05-14 18:06:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57664f55bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57664f55bc-5xsfb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibeacf36f655 [] []}} ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.104 [INFO][4522] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.135 [INFO][4536] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" HandleID="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Workload="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.165 [INFO][4536] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" 
HandleID="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Workload="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000274b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57664f55bc-5xsfb", "timestamp":"2025-05-14 18:07:08.135193292 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.165 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.165 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.165 [INFO][4536] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.167 [INFO][4536] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.171 [INFO][4536] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.174 [INFO][4536] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.176 [INFO][4536] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.178 [INFO][4536] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:07:08.495979 
containerd[1584]: 2025-05-14 18:07:08.179 [INFO][4536] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.180 [INFO][4536] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.231 [INFO][4536] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.370 [INFO][4536] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.370 [INFO][4536] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" host="localhost" May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.370 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:07:08.495979 containerd[1584]: 2025-05-14 18:07:08.370 [INFO][4536] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" HandleID="k8s-pod-network.4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Workload="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" May 14 18:07:08.496580 containerd[1584]: 2025-05-14 18:07:08.373 [INFO][4522] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0", GenerateName:"calico-kube-controllers-57664f55bc-", Namespace:"calico-system", SelfLink:"", UID:"12e75302-7522-48c2-b49c-260937f5c2a2", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57664f55bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57664f55bc-5xsfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibeacf36f655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:08.496580 containerd[1584]: 2025-05-14 18:07:08.373 [INFO][4522] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" May 14 18:07:08.496580 containerd[1584]: 2025-05-14 18:07:08.373 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeacf36f655 ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" May 14 18:07:08.496580 containerd[1584]: 2025-05-14 18:07:08.376 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" May 14 18:07:08.496580 containerd[1584]: 2025-05-14 18:07:08.376 [INFO][4522] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0", GenerateName:"calico-kube-controllers-57664f55bc-", Namespace:"calico-system", SelfLink:"", UID:"12e75302-7522-48c2-b49c-260937f5c2a2", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57664f55bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a", Pod:"calico-kube-controllers-57664f55bc-5xsfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibeacf36f655", MAC:"0a:92:6c:ca:9d:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:07:08.496580 containerd[1584]: 2025-05-14 18:07:08.490 [INFO][4522] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" Namespace="calico-system" Pod="calico-kube-controllers-57664f55bc-5xsfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57664f55bc--5xsfb-eth0" May 14 18:07:08.509113 systemd-networkd[1490]: caliaacf3b59fc3: Gained IPv6LL May 14 18:07:08.525034 containerd[1584]: time="2025-05-14T18:07:08.524426857Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:08.525822 containerd[1584]: time="2025-05-14T18:07:08.525783737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 18:07:08.526575 containerd[1584]: time="2025-05-14T18:07:08.526542880Z" level=info msg="connecting to shim 4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a" address="unix:///run/containerd/s/af3223e5a5cae95bb69ea0d71bd879a80301eb6c3020f926fd286f77f1a00286" namespace=k8s.io protocol=ttrpc version=3 May 14 18:07:08.527379 containerd[1584]: time="2025-05-14T18:07:08.527336978Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:08.529681 containerd[1584]: time="2025-05-14T18:07:08.529654211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:08.530414 containerd[1584]: time="2025-05-14T18:07:08.530371033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.576642954s" May 14 18:07:08.530459 containerd[1584]: time="2025-05-14T18:07:08.530416529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:07:08.531922 containerd[1584]: time="2025-05-14T18:07:08.531683569Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 18:07:08.533822 containerd[1584]: time="2025-05-14T18:07:08.533772181Z" level=info msg="CreateContainer within sandbox \"a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:07:08.554164 containerd[1584]: time="2025-05-14T18:07:08.554020196Z" level=info msg="Container e79d4da13d52a9db5be86ceece787f2e77f75d7628bfd6e178458b9faa8f1dc1: CDI devices from CRI Config.CDIDevices: []" May 14 18:07:08.562395 containerd[1584]: time="2025-05-14T18:07:08.562343801Z" level=info msg="CreateContainer within sandbox \"a9acbfb51652a90cf5b7d30a5f909c1fe15f6639b4e027eaa818b035cf2357a6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e79d4da13d52a9db5be86ceece787f2e77f75d7628bfd6e178458b9faa8f1dc1\"" May 14 18:07:08.563075 systemd[1]: Started cri-containerd-4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a.scope - libcontainer container 4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a. May 14 18:07:08.565066 containerd[1584]: time="2025-05-14T18:07:08.563064892Z" level=info msg="StartContainer for \"e79d4da13d52a9db5be86ceece787f2e77f75d7628bfd6e178458b9faa8f1dc1\"" May 14 18:07:08.565066 containerd[1584]: time="2025-05-14T18:07:08.564047977Z" level=info msg="connecting to shim e79d4da13d52a9db5be86ceece787f2e77f75d7628bfd6e178458b9faa8f1dc1" address="unix:///run/containerd/s/9b800791adcb2eca2401bc3c47ebc1690182791e25e96c96370e2a0ee4e8b4ad" protocol=ttrpc version=3 May 14 18:07:08.576873 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:07:08.590107 systemd[1]: Started cri-containerd-e79d4da13d52a9db5be86ceece787f2e77f75d7628bfd6e178458b9faa8f1dc1.scope - libcontainer container e79d4da13d52a9db5be86ceece787f2e77f75d7628bfd6e178458b9faa8f1dc1. 
May 14 18:07:08.613433 containerd[1584]: time="2025-05-14T18:07:08.613386974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57664f55bc-5xsfb,Uid:12e75302-7522-48c2-b49c-260937f5c2a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a\"" May 14 18:07:08.752779 containerd[1584]: time="2025-05-14T18:07:08.752723023Z" level=info msg="StartContainer for \"e79d4da13d52a9db5be86ceece787f2e77f75d7628bfd6e178458b9faa8f1dc1\" returns successfully" May 14 18:07:08.900499 kubelet[2683]: I0514 18:07:08.900416 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77698d8d79-wxx2d" podStartSLOduration=25.312426141 podStartE2EDuration="28.900400462s" podCreationTimestamp="2025-05-14 18:06:40 +0000 UTC" firstStartedPulling="2025-05-14 18:07:04.943454978 +0000 UTC m=+35.360491632" lastFinishedPulling="2025-05-14 18:07:08.531429299 +0000 UTC m=+38.948465953" observedRunningTime="2025-05-14 18:07:08.8999228 +0000 UTC m=+39.316959454" watchObservedRunningTime="2025-05-14 18:07:08.900400462 +0000 UTC m=+39.317437106" May 14 18:07:09.009254 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:46622.service - OpenSSH per-connection server daemon (10.0.0.1:46622). May 14 18:07:09.182548 sshd[4644]: Accepted publickey for core from 10.0.0.1 port 46622 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA May 14 18:07:09.184417 sshd-session[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:09.189370 systemd-logind[1566]: New session 10 of user core. May 14 18:07:09.198011 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 14 18:07:09.352157 sshd[4646]: Connection closed by 10.0.0.1 port 46622 May 14 18:07:09.352543 sshd-session[4644]: pam_unix(sshd:session): session closed for user core May 14 18:07:09.357567 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:46622.service: Deactivated successfully. May 14 18:07:09.360031 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:07:09.361064 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. May 14 18:07:09.363377 systemd-logind[1566]: Removed session 10. May 14 18:07:09.661152 systemd-networkd[1490]: calibeacf36f655: Gained IPv6LL May 14 18:07:10.611827 containerd[1584]: time="2025-05-14T18:07:10.611776644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:10.612634 containerd[1584]: time="2025-05-14T18:07:10.612570812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 18:07:10.613949 containerd[1584]: time="2025-05-14T18:07:10.613918775Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:10.632715 containerd[1584]: time="2025-05-14T18:07:10.632647652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:10.633347 containerd[1584]: time="2025-05-14T18:07:10.633301716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.101580314s" May 14 18:07:10.633399 containerd[1584]: time="2025-05-14T18:07:10.633349356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 18:07:10.634383 containerd[1584]: time="2025-05-14T18:07:10.634355374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:07:10.635256 containerd[1584]: time="2025-05-14T18:07:10.635226867Z" level=info msg="CreateContainer within sandbox \"9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 18:07:10.650247 containerd[1584]: time="2025-05-14T18:07:10.650198808Z" level=info msg="Container c6af480cc90c6c7948e93ddf11717b9982d90878624825991ac324472b8afe62: CDI devices from CRI Config.CDIDevices: []" May 14 18:07:10.658751 containerd[1584]: time="2025-05-14T18:07:10.658705845Z" level=info msg="CreateContainer within sandbox \"9ed2d026f95b58be76af8faa2cbc00402c898445dc3f164a9b0acac398f77559\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c6af480cc90c6c7948e93ddf11717b9982d90878624825991ac324472b8afe62\"" May 14 18:07:10.659263 containerd[1584]: time="2025-05-14T18:07:10.659212882Z" level=info msg="StartContainer for \"c6af480cc90c6c7948e93ddf11717b9982d90878624825991ac324472b8afe62\"" May 14 18:07:10.660777 containerd[1584]: time="2025-05-14T18:07:10.660746464Z" level=info msg="connecting to shim c6af480cc90c6c7948e93ddf11717b9982d90878624825991ac324472b8afe62" address="unix:///run/containerd/s/4a460d5cd36a9332890e8fb75f4565f67a3ba3b8e66097f9145f332a5d506168" protocol=ttrpc version=3 May 14 18:07:10.685229 systemd[1]: Started 
cri-containerd-c6af480cc90c6c7948e93ddf11717b9982d90878624825991ac324472b8afe62.scope - libcontainer container c6af480cc90c6c7948e93ddf11717b9982d90878624825991ac324472b8afe62. May 14 18:07:11.320536 containerd[1584]: time="2025-05-14T18:07:11.320478351Z" level=info msg="StartContainer for \"c6af480cc90c6c7948e93ddf11717b9982d90878624825991ac324472b8afe62\" returns successfully" May 14 18:07:11.355354 containerd[1584]: time="2025-05-14T18:07:11.355302829Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:11.356452 containerd[1584]: time="2025-05-14T18:07:11.356391413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 18:07:11.358715 containerd[1584]: time="2025-05-14T18:07:11.358660372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 724.272727ms" May 14 18:07:11.358715 containerd[1584]: time="2025-05-14T18:07:11.358712550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:07:11.360035 containerd[1584]: time="2025-05-14T18:07:11.360002754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 18:07:11.361337 containerd[1584]: time="2025-05-14T18:07:11.361284681Z" level=info msg="CreateContainer within sandbox \"158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:07:11.370629 containerd[1584]: 
time="2025-05-14T18:07:11.370588547Z" level=info msg="Container 76229aac66eb7a0a998ac648b63d661de94974d61edba9cf447a92a723fb93c3: CDI devices from CRI Config.CDIDevices: []" May 14 18:07:11.378199 containerd[1584]: time="2025-05-14T18:07:11.378157613Z" level=info msg="CreateContainer within sandbox \"158b1e9b09246962543ac3001ac99b1af86d4c2319ce9b1c2430b2ee4b316998\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"76229aac66eb7a0a998ac648b63d661de94974d61edba9cf447a92a723fb93c3\"" May 14 18:07:11.378724 containerd[1584]: time="2025-05-14T18:07:11.378656433Z" level=info msg="StartContainer for \"76229aac66eb7a0a998ac648b63d661de94974d61edba9cf447a92a723fb93c3\"" May 14 18:07:11.379831 containerd[1584]: time="2025-05-14T18:07:11.379773641Z" level=info msg="connecting to shim 76229aac66eb7a0a998ac648b63d661de94974d61edba9cf447a92a723fb93c3" address="unix:///run/containerd/s/7218a3f4bc7b426e8209a068ac9d090399f60de6d97b170dc49061d0f7d8f9a0" protocol=ttrpc version=3 May 14 18:07:11.403013 systemd[1]: Started cri-containerd-76229aac66eb7a0a998ac648b63d661de94974d61edba9cf447a92a723fb93c3.scope - libcontainer container 76229aac66eb7a0a998ac648b63d661de94974d61edba9cf447a92a723fb93c3. 
May 14 18:07:11.503416 containerd[1584]: time="2025-05-14T18:07:11.503373087Z" level=info msg="StartContainer for \"76229aac66eb7a0a998ac648b63d661de94974d61edba9cf447a92a723fb93c3\" returns successfully" May 14 18:07:11.737266 kubelet[2683]: I0514 18:07:11.737223 2683 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 18:07:11.737266 kubelet[2683]: I0514 18:07:11.737263 2683 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 18:07:12.342262 kubelet[2683]: I0514 18:07:12.341971 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77698d8d79-b9jcf" podStartSLOduration=27.844491513 podStartE2EDuration="32.341954044s" podCreationTimestamp="2025-05-14 18:06:40 +0000 UTC" firstStartedPulling="2025-05-14 18:07:06.862198097 +0000 UTC m=+37.279234751" lastFinishedPulling="2025-05-14 18:07:11.359660628 +0000 UTC m=+41.776697282" observedRunningTime="2025-05-14 18:07:12.340711149 +0000 UTC m=+42.757747803" watchObservedRunningTime="2025-05-14 18:07:12.341954044 +0000 UTC m=+42.758990698" May 14 18:07:12.357187 kubelet[2683]: I0514 18:07:12.357115 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l9xcz" podStartSLOduration=26.256849368 podStartE2EDuration="32.357097119s" podCreationTimestamp="2025-05-14 18:06:40 +0000 UTC" firstStartedPulling="2025-05-14 18:07:04.533911473 +0000 UTC m=+34.950948127" lastFinishedPulling="2025-05-14 18:07:10.634159224 +0000 UTC m=+41.051195878" observedRunningTime="2025-05-14 18:07:12.353286413 +0000 UTC m=+42.770323067" watchObservedRunningTime="2025-05-14 18:07:12.357097119 +0000 UTC m=+42.774133773" May 14 18:07:12.953064 containerd[1584]: time="2025-05-14T18:07:12.953016871Z" level=info msg="TaskExit 
event in podsandbox handler container_id:\"de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098\" id:\"7321fa1ce48bdf3e48675f2b270bebde0efa481ae6b0649043e77870c54730c4\" pid:4767 exited_at:{seconds:1747246032 nanos:952574588}" May 14 18:07:13.143201 containerd[1584]: time="2025-05-14T18:07:13.143144685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:13.143838 containerd[1584]: time="2025-05-14T18:07:13.143797858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 18:07:13.144917 containerd[1584]: time="2025-05-14T18:07:13.144869157Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:13.146666 containerd[1584]: time="2025-05-14T18:07:13.146625088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:07:13.147210 containerd[1584]: time="2025-05-14T18:07:13.147186737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 1.787016589s" May 14 18:07:13.147280 containerd[1584]: time="2025-05-14T18:07:13.147212465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 18:07:13.156120 containerd[1584]: 
time="2025-05-14T18:07:13.156078702Z" level=info msg="CreateContainer within sandbox \"4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 18:07:13.165380 containerd[1584]: time="2025-05-14T18:07:13.165350433Z" level=info msg="Container 71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873: CDI devices from CRI Config.CDIDevices: []" May 14 18:07:13.174234 containerd[1584]: time="2025-05-14T18:07:13.174190640Z" level=info msg="CreateContainer within sandbox \"4126cfcb25ea482b5c05dc3642d3158f0f8de2f9d56fe87b74a391754e1f5e3a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873\"" May 14 18:07:13.174729 containerd[1584]: time="2025-05-14T18:07:13.174698638Z" level=info msg="StartContainer for \"71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873\"" May 14 18:07:13.175667 containerd[1584]: time="2025-05-14T18:07:13.175643580Z" level=info msg="connecting to shim 71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873" address="unix:///run/containerd/s/af3223e5a5cae95bb69ea0d71bd879a80301eb6c3020f926fd286f77f1a00286" protocol=ttrpc version=3 May 14 18:07:13.194980 systemd[1]: Started cri-containerd-71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873.scope - libcontainer container 71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873. 
May 14 18:07:13.245696 containerd[1584]: time="2025-05-14T18:07:13.245305622Z" level=info msg="StartContainer for \"71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873\" returns successfully"
May 14 18:07:13.339951 kubelet[2683]: I0514 18:07:13.339891 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57664f55bc-5xsfb" podStartSLOduration=28.806767821 podStartE2EDuration="33.339874044s" podCreationTimestamp="2025-05-14 18:06:40 +0000 UTC" firstStartedPulling="2025-05-14 18:07:08.61496969 +0000 UTC m=+39.032006344" lastFinishedPulling="2025-05-14 18:07:13.148075913 +0000 UTC m=+43.565112567" observedRunningTime="2025-05-14 18:07:13.339069547 +0000 UTC m=+43.756106201" watchObservedRunningTime="2025-05-14 18:07:13.339874044 +0000 UTC m=+43.756910688"
May 14 18:07:14.361931 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:33698.service - OpenSSH per-connection server daemon (10.0.0.1:33698).
May 14 18:07:14.378225 containerd[1584]: time="2025-05-14T18:07:14.378181835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873\" id:\"00e2cb9456bd95f94b4e5bbdfa3f4ab23aef7e2c2456a56b2151e9cc9e28ab0c\" pid:4828 exited_at:{seconds:1747246034 nanos:377750952}"
May 14 18:07:14.425907 sshd[4835]: Accepted publickey for core from 10.0.0.1 port 33698 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:14.427201 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:14.434231 systemd-logind[1566]: New session 11 of user core.
May 14 18:07:14.444011 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 18:07:14.566132 sshd[4841]: Connection closed by 10.0.0.1 port 33698
May 14 18:07:14.566570 sshd-session[4835]: pam_unix(sshd:session): session closed for user core
May 14 18:07:14.575874 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:33698.service: Deactivated successfully.
May 14 18:07:14.578364 systemd[1]: session-11.scope: Deactivated successfully.
May 14 18:07:14.579889 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit.
May 14 18:07:14.583610 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:33710.service - OpenSSH per-connection server daemon (10.0.0.1:33710).
May 14 18:07:14.584963 systemd-logind[1566]: Removed session 11.
May 14 18:07:14.634328 sshd[4855]: Accepted publickey for core from 10.0.0.1 port 33710 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:14.636082 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:14.640667 systemd-logind[1566]: New session 12 of user core.
May 14 18:07:14.647982 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 18:07:14.813755 sshd[4857]: Connection closed by 10.0.0.1 port 33710
May 14 18:07:14.814091 sshd-session[4855]: pam_unix(sshd:session): session closed for user core
May 14 18:07:14.823944 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:33710.service: Deactivated successfully.
May 14 18:07:14.826317 systemd[1]: session-12.scope: Deactivated successfully.
May 14 18:07:14.828256 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit.
May 14 18:07:14.834062 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:33720.service - OpenSSH per-connection server daemon (10.0.0.1:33720).
May 14 18:07:14.835724 systemd-logind[1566]: Removed session 12.
May 14 18:07:14.881734 sshd[4869]: Accepted publickey for core from 10.0.0.1 port 33720 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:14.883733 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:14.888834 systemd-logind[1566]: New session 13 of user core.
May 14 18:07:14.901982 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 18:07:15.014811 sshd[4871]: Connection closed by 10.0.0.1 port 33720
May 14 18:07:15.015120 sshd-session[4869]: pam_unix(sshd:session): session closed for user core
May 14 18:07:15.019363 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:33720.service: Deactivated successfully.
May 14 18:07:15.021434 systemd[1]: session-13.scope: Deactivated successfully.
May 14 18:07:15.022254 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit.
May 14 18:07:15.023545 systemd-logind[1566]: Removed session 13.
May 14 18:07:20.032092 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:33724.service - OpenSSH per-connection server daemon (10.0.0.1:33724).
May 14 18:07:20.084758 sshd[4892]: Accepted publickey for core from 10.0.0.1 port 33724 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:20.086565 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:20.092828 systemd-logind[1566]: New session 14 of user core.
May 14 18:07:20.098023 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 18:07:20.235824 sshd[4898]: Connection closed by 10.0.0.1 port 33724
May 14 18:07:20.236150 sshd-session[4892]: pam_unix(sshd:session): session closed for user core
May 14 18:07:20.240735 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:33724.service: Deactivated successfully.
May 14 18:07:20.243034 systemd[1]: session-14.scope: Deactivated successfully.
May 14 18:07:20.244012 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit.
May 14 18:07:20.245470 systemd-logind[1566]: Removed session 14.
May 14 18:07:25.249170 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:45454.service - OpenSSH per-connection server daemon (10.0.0.1:45454).
May 14 18:07:25.298703 sshd[4911]: Accepted publickey for core from 10.0.0.1 port 45454 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:25.300293 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:25.305001 systemd-logind[1566]: New session 15 of user core.
May 14 18:07:25.316148 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 18:07:25.442987 sshd[4913]: Connection closed by 10.0.0.1 port 45454
May 14 18:07:25.443326 sshd-session[4911]: pam_unix(sshd:session): session closed for user core
May 14 18:07:25.447648 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:45454.service: Deactivated successfully.
May 14 18:07:25.449771 systemd[1]: session-15.scope: Deactivated successfully.
May 14 18:07:25.450645 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit.
May 14 18:07:25.451761 systemd-logind[1566]: Removed session 15.
May 14 18:07:26.422928 containerd[1584]: time="2025-05-14T18:07:26.422882050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873\" id:\"fee21ac394a62a526fc8f6d9479e32bffae3be97fb6e6fea877abc96c76424ad\" pid:4937 exited_at:{seconds:1747246046 nanos:422352042}"
May 14 18:07:30.459027 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:45466.service - OpenSSH per-connection server daemon (10.0.0.1:45466).
May 14 18:07:30.511328 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 45466 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:30.513106 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:30.518244 systemd-logind[1566]: New session 16 of user core.
May 14 18:07:30.525029 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 18:07:30.646547 sshd[4954]: Connection closed by 10.0.0.1 port 45466
May 14 18:07:30.646903 sshd-session[4952]: pam_unix(sshd:session): session closed for user core
May 14 18:07:30.651267 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:45466.service: Deactivated successfully.
May 14 18:07:30.653823 systemd[1]: session-16.scope: Deactivated successfully.
May 14 18:07:30.654727 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit.
May 14 18:07:30.656623 systemd-logind[1566]: Removed session 16.
May 14 18:07:35.660113 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:60192.service - OpenSSH per-connection server daemon (10.0.0.1:60192).
May 14 18:07:35.721958 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 60192 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:35.723884 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:35.730973 systemd-logind[1566]: New session 17 of user core.
May 14 18:07:35.734030 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 18:07:35.864237 sshd[4974]: Connection closed by 10.0.0.1 port 60192
May 14 18:07:35.864541 sshd-session[4972]: pam_unix(sshd:session): session closed for user core
May 14 18:07:35.877054 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:60192.service: Deactivated successfully.
May 14 18:07:35.879347 systemd[1]: session-17.scope: Deactivated successfully.
May 14 18:07:35.880157 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit.
May 14 18:07:35.883229 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:60208.service - OpenSSH per-connection server daemon (10.0.0.1:60208).
May 14 18:07:35.884103 systemd-logind[1566]: Removed session 17.
May 14 18:07:35.933487 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 60208 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:35.935106 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:35.939258 systemd-logind[1566]: New session 18 of user core.
May 14 18:07:35.949959 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 18:07:36.270394 sshd[4989]: Connection closed by 10.0.0.1 port 60208
May 14 18:07:36.270823 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
May 14 18:07:36.286856 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:60208.service: Deactivated successfully.
May 14 18:07:36.288990 systemd[1]: session-18.scope: Deactivated successfully.
May 14 18:07:36.289751 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit.
May 14 18:07:36.293389 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:60222.service - OpenSSH per-connection server daemon (10.0.0.1:60222).
May 14 18:07:36.294075 systemd-logind[1566]: Removed session 18.
May 14 18:07:36.351994 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 60222 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:36.353301 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:36.357639 systemd-logind[1566]: New session 19 of user core.
May 14 18:07:36.366027 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 18:07:37.901932 sshd[5002]: Connection closed by 10.0.0.1 port 60222
May 14 18:07:37.902315 sshd-session[5000]: pam_unix(sshd:session): session closed for user core
May 14 18:07:37.918910 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:60222.service: Deactivated successfully.
May 14 18:07:37.922097 systemd[1]: session-19.scope: Deactivated successfully.
May 14 18:07:37.923805 systemd[1]: session-19.scope: Consumed 571ms CPU time, 68.5M memory peak.
May 14 18:07:37.925454 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit.
May 14 18:07:37.930669 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:60224.service - OpenSSH per-connection server daemon (10.0.0.1:60224).
May 14 18:07:37.932148 systemd-logind[1566]: Removed session 19.
May 14 18:07:37.984312 sshd[5022]: Accepted publickey for core from 10.0.0.1 port 60224 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:37.985806 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:37.990438 systemd-logind[1566]: New session 20 of user core.
May 14 18:07:38.000983 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 18:07:38.217161 sshd[5024]: Connection closed by 10.0.0.1 port 60224
May 14 18:07:38.217651 sshd-session[5022]: pam_unix(sshd:session): session closed for user core
May 14 18:07:38.234753 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:60224.service: Deactivated successfully.
May 14 18:07:38.236839 systemd[1]: session-20.scope: Deactivated successfully.
May 14 18:07:38.237657 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit.
May 14 18:07:38.240741 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:60238.service - OpenSSH per-connection server daemon (10.0.0.1:60238).
May 14 18:07:38.241378 systemd-logind[1566]: Removed session 20.
May 14 18:07:38.292907 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 60238 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:38.294291 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:38.298598 systemd-logind[1566]: New session 21 of user core.
May 14 18:07:38.307971 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 18:07:38.444199 sshd[5037]: Connection closed by 10.0.0.1 port 60238
May 14 18:07:38.444494 sshd-session[5035]: pam_unix(sshd:session): session closed for user core
May 14 18:07:38.448015 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:60238.service: Deactivated successfully.
May 14 18:07:38.450281 systemd[1]: session-21.scope: Deactivated successfully.
May 14 18:07:38.453127 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit.
May 14 18:07:38.454183 systemd-logind[1566]: Removed session 21.
May 14 18:07:42.972390 containerd[1584]: time="2025-05-14T18:07:42.972342493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de247f3109eef990a00d5fe9c42bcdec144cb46756a6050e2bfa3697dd188098\" id:\"4e67ca33d3abb6faac8fb50deae3a016163f1e010fe57a7f51d97d86e09d2bc1\" pid:5068 exited_at:{seconds:1747246062 nanos:972044623}"
May 14 18:07:43.460454 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:60248.service - OpenSSH per-connection server daemon (10.0.0.1:60248).
May 14 18:07:43.512282 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 60248 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:43.513814 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:43.518038 systemd-logind[1566]: New session 22 of user core.
May 14 18:07:43.525971 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 18:07:43.653630 sshd[5085]: Connection closed by 10.0.0.1 port 60248
May 14 18:07:43.653943 sshd-session[5083]: pam_unix(sshd:session): session closed for user core
May 14 18:07:43.659336 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:60248.service: Deactivated successfully.
May 14 18:07:43.661609 systemd[1]: session-22.scope: Deactivated successfully.
May 14 18:07:43.662503 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit.
May 14 18:07:43.663977 systemd-logind[1566]: Removed session 22.
May 14 18:07:48.670576 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:54440.service - OpenSSH per-connection server daemon (10.0.0.1:54440).
May 14 18:07:48.732731 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 54440 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:48.734061 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:48.738290 systemd-logind[1566]: New session 23 of user core.
May 14 18:07:48.752953 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 18:07:48.861279 sshd[5105]: Connection closed by 10.0.0.1 port 54440
May 14 18:07:48.861583 sshd-session[5103]: pam_unix(sshd:session): session closed for user core
May 14 18:07:48.866150 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:54440.service: Deactivated successfully.
May 14 18:07:48.868371 systemd[1]: session-23.scope: Deactivated successfully.
May 14 18:07:48.869250 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit.
May 14 18:07:48.870710 systemd-logind[1566]: Removed session 23.
May 14 18:07:53.873749 systemd[1]: Started sshd@23-10.0.0.82:22-10.0.0.1:53570.service - OpenSSH per-connection server daemon (10.0.0.1:53570).
May 14 18:07:53.929818 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 53570 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:53.931286 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:53.935226 systemd-logind[1566]: New session 24 of user core.
May 14 18:07:53.944963 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 18:07:54.051485 sshd[5120]: Connection closed by 10.0.0.1 port 53570
May 14 18:07:54.051860 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
May 14 18:07:54.054690 systemd[1]: sshd@23-10.0.0.82:22-10.0.0.1:53570.service: Deactivated successfully.
May 14 18:07:54.056617 systemd[1]: session-24.scope: Deactivated successfully.
May 14 18:07:54.058051 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit.
May 14 18:07:54.059378 systemd-logind[1566]: Removed session 24.
May 14 18:07:56.423760 containerd[1584]: time="2025-05-14T18:07:56.423716371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f9dc6d42cf2fb7f77d9105127cf96ce1e993732daecaed41d2993d6b3d7873\" id:\"6c34a9eab5212438c79323988dbbfcde332440fea735f9bb92cd26eeda2f6f69\" pid:5144 exited_at:{seconds:1747246076 nanos:423549657}"
May 14 18:07:59.072377 systemd[1]: Started sshd@24-10.0.0.82:22-10.0.0.1:53572.service - OpenSSH per-connection server daemon (10.0.0.1:53572).
May 14 18:07:59.139323 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 53572 ssh2: RSA SHA256:29vqBBH9azFCifOLq9MlGVIHcdc45UJsdh7YoX9ptPA
May 14 18:07:59.140603 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:59.144875 systemd-logind[1566]: New session 25 of user core.
May 14 18:07:59.152971 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 18:07:59.275611 sshd[5158]: Connection closed by 10.0.0.1 port 53572
May 14 18:07:59.275975 sshd-session[5155]: pam_unix(sshd:session): session closed for user core
May 14 18:07:59.280682 systemd[1]: sshd@24-10.0.0.82:22-10.0.0.1:53572.service: Deactivated successfully.
May 14 18:07:59.282760 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:07:59.283588 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit.
May 14 18:07:59.285189 systemd-logind[1566]: Removed session 25.