May 12 13:09:35.810775 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 11:32:17 -00 2025
May 12 13:09:35.810798 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62c60c65dace52c1f99bbb1aca7fdf1720d975c5bba458cbf5745d5f40d8ca41
May 12 13:09:35.810807 kernel: BIOS-provided physical RAM map:
May 12 13:09:35.810814 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 12 13:09:35.810820 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 12 13:09:35.810827 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 12 13:09:35.810834 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 12 13:09:35.810842 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 12 13:09:35.810849 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 12 13:09:35.810855 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 12 13:09:35.810862 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 12 13:09:35.810868 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 12 13:09:35.810875 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 12 13:09:35.810881 kernel: NX (Execute Disable) protection: active
May 12 13:09:35.810891 kernel: APIC: Static calls initialized
May 12 13:09:35.810898 kernel: SMBIOS 2.8 present.
May 12 13:09:35.810905 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 12 13:09:35.810912 kernel: DMI: Memory slots populated: 1/1
May 12 13:09:35.810919 kernel: Hypervisor detected: KVM
May 12 13:09:35.810926 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 12 13:09:35.810933 kernel: kvm-clock: using sched offset of 3214826965 cycles
May 12 13:09:35.810941 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 12 13:09:35.810948 kernel: tsc: Detected 2794.748 MHz processor
May 12 13:09:35.810955 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 12 13:09:35.810965 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 12 13:09:35.810972 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 12 13:09:35.810980 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 12 13:09:35.810987 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 12 13:09:35.810994 kernel: Using GB pages for direct mapping
May 12 13:09:35.811001 kernel: ACPI: Early table checksum verification disabled
May 12 13:09:35.811008 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 12 13:09:35.811016 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:09:35.811025 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:09:35.811033 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:09:35.811040 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 12 13:09:35.811047 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:09:35.811055 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:09:35.811074 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:09:35.811081 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:09:35.811099 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 12 13:09:35.811123 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 12 13:09:35.811138 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 12 13:09:35.811149 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 12 13:09:35.811157 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 12 13:09:35.811164 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 12 13:09:35.811179 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 12 13:09:35.811189 kernel: No NUMA configuration found
May 12 13:09:35.811196 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 12 13:09:35.811204 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 12 13:09:35.811211 kernel: Zone ranges:
May 12 13:09:35.811219 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
May 12 13:09:35.811226 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
May 12 13:09:35.811234 kernel:   Normal   empty
May 12 13:09:35.811241 kernel:   Device   empty
May 12 13:09:35.811248 kernel: Movable zone start for each node
May 12 13:09:35.811256 kernel: Early memory node ranges
May 12 13:09:35.811265 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
May 12 13:09:35.811273 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 12 13:09:35.811280 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 12 13:09:35.811287 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 12 13:09:35.811295 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 12 13:09:35.811302 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 12 13:09:35.811310 kernel: ACPI: PM-Timer IO Port: 0x608
May 12 13:09:35.811317 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 12 13:09:35.811324 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 12 13:09:35.811334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 12 13:09:35.811341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 12 13:09:35.811349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 12 13:09:35.811356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 12 13:09:35.811363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 12 13:09:35.811371 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 12 13:09:35.811378 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 12 13:09:35.811385 kernel: TSC deadline timer available
May 12 13:09:35.811401 kernel: CPU topo: Max. logical packages: 1
May 12 13:09:35.811418 kernel: CPU topo: Max. logical dies: 1
May 12 13:09:35.811427 kernel: CPU topo: Max. dies per package: 1
May 12 13:09:35.811434 kernel: CPU topo: Max. threads per core: 1
May 12 13:09:35.811441 kernel: CPU topo: Num. cores per package: 4
May 12 13:09:35.811449 kernel: CPU topo: Num. threads per package: 4
May 12 13:09:35.811456 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 12 13:09:35.811463 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 12 13:09:35.811471 kernel: kvm-guest: KVM setup pv remote TLB flush
May 12 13:09:35.811478 kernel: kvm-guest: setup PV sched yield
May 12 13:09:35.811485 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 12 13:09:35.811495 kernel: Booting paravirtualized kernel on KVM
May 12 13:09:35.811503 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 12 13:09:35.811510 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 12 13:09:35.811518 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 12 13:09:35.811525 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 12 13:09:35.811533 kernel: pcpu-alloc: [0] 0 1 2 3
May 12 13:09:35.811540 kernel: kvm-guest: PV spinlocks enabled
May 12 13:09:35.811547 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 12 13:09:35.811556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62c60c65dace52c1f99bbb1aca7fdf1720d975c5bba458cbf5745d5f40d8ca41
May 12 13:09:35.811566 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 12 13:09:35.811573 kernel: random: crng init done
May 12 13:09:35.811581 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 12 13:09:35.811588 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 12 13:09:35.811595 kernel: Fallback order for Node 0: 0
May 12 13:09:35.811603 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 642938
May 12 13:09:35.811610 kernel: Policy zone: DMA32
May 12 13:09:35.811618 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 12 13:09:35.811627 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 12 13:09:35.811635 kernel: ftrace: allocating 40065 entries in 157 pages
May 12 13:09:35.811642 kernel: ftrace: allocated 157 pages with 5 groups
May 12 13:09:35.811650 kernel: Dynamic Preempt: voluntary
May 12 13:09:35.811657 kernel: rcu: Preemptible hierarchical RCU implementation.
May 12 13:09:35.811665 kernel: rcu: RCU event tracing is enabled.
May 12 13:09:35.811672 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 12 13:09:35.811680 kernel: Trampoline variant of Tasks RCU enabled.
May 12 13:09:35.811688 kernel: Rude variant of Tasks RCU enabled.
May 12 13:09:35.811697 kernel: Tracing variant of Tasks RCU enabled.
May 12 13:09:35.811705 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 12 13:09:35.811713 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 12 13:09:35.811720 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 13:09:35.811728 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 13:09:35.811735 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 13:09:35.811743 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 12 13:09:35.811750 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 12 13:09:35.811766 kernel: Console: colour VGA+ 80x25
May 12 13:09:35.811774 kernel: printk: legacy console [ttyS0] enabled
May 12 13:09:35.811781 kernel: ACPI: Core revision 20240827
May 12 13:09:35.811789 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 12 13:09:35.811799 kernel: APIC: Switch to symmetric I/O mode setup
May 12 13:09:35.811806 kernel: x2apic enabled
May 12 13:09:35.811814 kernel: APIC: Switched APIC routing to: physical x2apic
May 12 13:09:35.811822 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 12 13:09:35.811830 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 12 13:09:35.811840 kernel: kvm-guest: setup PV IPIs
May 12 13:09:35.811848 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 12 13:09:35.811856 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 12 13:09:35.811864 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 12 13:09:35.811871 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 12 13:09:35.811879 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 12 13:09:35.811887 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 12 13:09:35.811895 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 12 13:09:35.811902 kernel: Spectre V2 : Mitigation: Retpolines
May 12 13:09:35.811912 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 12 13:09:35.811920 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 12 13:09:35.811928 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 12 13:09:35.811936 kernel: RETBleed: Mitigation: untrained return thunk
May 12 13:09:35.811944 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 12 13:09:35.811951 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 12 13:09:35.811959 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 12 13:09:35.811968 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 12 13:09:35.811978 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 12 13:09:35.811986 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 12 13:09:35.811993 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 12 13:09:35.812001 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 12 13:09:35.812009 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
May 12 13:09:35.812017 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 12 13:09:35.812024 kernel: Freeing SMP alternatives memory: 32K
May 12 13:09:35.812032 kernel: pid_max: default: 32768 minimum: 301
May 12 13:09:35.812040 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 12 13:09:35.812049 kernel: landlock: Up and running.
May 12 13:09:35.812133 kernel: SELinux:  Initializing.
May 12 13:09:35.812141 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 13:09:35.812149 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 13:09:35.812157 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 12 13:09:35.812165 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 12 13:09:35.812179 kernel: ... version:                0
May 12 13:09:35.812187 kernel: ... bit width:              48
May 12 13:09:35.812195 kernel: ... generic registers:      6
May 12 13:09:35.812205 kernel: ... value mask:             0000ffffffffffff
May 12 13:09:35.812213 kernel: ... max period:             00007fffffffffff
May 12 13:09:35.812221 kernel: ... fixed-purpose events:   0
May 12 13:09:35.812228 kernel: ... event mask:             000000000000003f
May 12 13:09:35.812236 kernel: signal: max sigframe size: 1776
May 12 13:09:35.812244 kernel: rcu: Hierarchical SRCU implementation.
May 12 13:09:35.812251 kernel: rcu: 	Max phase no-delay instances is 400.
May 12 13:09:35.812259 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 12 13:09:35.812267 kernel: smp: Bringing up secondary CPUs ...
May 12 13:09:35.812277 kernel: smpboot: x86: Booting SMP configuration:
May 12 13:09:35.812284 kernel: .... node  #0, CPUs:        #1  #2  #3
May 12 13:09:35.812292 kernel: smp: Brought up 1 node, 4 CPUs
May 12 13:09:35.812299 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 12 13:09:35.812308 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 136904K reserved, 0K cma-reserved)
May 12 13:09:35.812315 kernel: devtmpfs: initialized
May 12 13:09:35.812323 kernel: x86/mm: Memory block size: 128MB
May 12 13:09:35.812331 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 12 13:09:35.812339 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 12 13:09:35.812349 kernel: pinctrl core: initialized pinctrl subsystem
May 12 13:09:35.812356 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 12 13:09:35.812364 kernel: audit: initializing netlink subsys (disabled)
May 12 13:09:35.812372 kernel: audit: type=2000 audit(1747055372.339:1): state=initialized audit_enabled=0 res=1
May 12 13:09:35.812380 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 12 13:09:35.812387 kernel: thermal_sys: Registered thermal governor 'user_space'
May 12 13:09:35.812395 kernel: cpuidle: using governor menu
May 12 13:09:35.812403 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 12 13:09:35.812410 kernel: dca service started, version 1.12.1
May 12 13:09:35.812420 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 12 13:09:35.812428 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 12 13:09:35.812436 kernel: PCI: Using configuration type 1 for base access
May 12 13:09:35.812444 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 12 13:09:35.812451 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 12 13:09:35.812459 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 12 13:09:35.812467 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 12 13:09:35.812475 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 12 13:09:35.812482 kernel: ACPI: Added _OSI(Module Device)
May 12 13:09:35.812492 kernel: ACPI: Added _OSI(Processor Device)
May 12 13:09:35.812499 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 12 13:09:35.812507 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 12 13:09:35.812515 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 12 13:09:35.812522 kernel: ACPI: Interpreter enabled
May 12 13:09:35.812530 kernel: ACPI: PM: (supports S0 S3 S5)
May 12 13:09:35.812546 kernel: ACPI: Using IOAPIC for interrupt routing
May 12 13:09:35.812555 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 12 13:09:35.812570 kernel: PCI: Using E820 reservations for host bridge windows
May 12 13:09:35.812588 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 12 13:09:35.812596 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 12 13:09:35.812797 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 12 13:09:35.812916 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 12 13:09:35.813028 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 12 13:09:35.813039 kernel: PCI host bridge to bus 0000:00
May 12 13:09:35.813180 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
May 12 13:09:35.813292 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
May 12 13:09:35.813398 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 12 13:09:35.813502 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 12 13:09:35.813631 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 12 13:09:35.813736 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 12 13:09:35.813839 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 12 13:09:35.813975 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 12 13:09:35.814122 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 12 13:09:35.814249 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 12 13:09:35.814362 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 12 13:09:35.814475 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 12 13:09:35.814587 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 12 13:09:35.814709 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 12 13:09:35.814827 kernel: pci 0000:00:02.0: BAR 0 [io  0xc0c0-0xc0df]
May 12 13:09:35.814940 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 12 13:09:35.815054 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 12 13:09:35.815205 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 12 13:09:35.815321 kernel: pci 0000:00:03.0: BAR 0 [io  0xc000-0xc07f]
May 12 13:09:35.815434 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 12 13:09:35.815548 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 12 13:09:35.815682 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 12 13:09:35.815796 kernel: pci 0000:00:04.0: BAR 0 [io  0xc0e0-0xc0ff]
May 12 13:09:35.815910 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 12 13:09:35.816022 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 12 13:09:35.816213 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 12 13:09:35.816337 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 12 13:09:35.816453 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 12 13:09:35.816582 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 12 13:09:35.816695 kernel: pci 0000:00:1f.2: BAR 4 [io  0xc100-0xc11f]
May 12 13:09:35.816808 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 12 13:09:35.816929 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 12 13:09:35.817043 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
May 12 13:09:35.817053 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 12 13:09:35.817080 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 12 13:09:35.817088 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 12 13:09:35.817096 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 12 13:09:35.817104 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 12 13:09:35.817112 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 12 13:09:35.817119 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 12 13:09:35.817127 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 12 13:09:35.817135 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 12 13:09:35.817143 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 12 13:09:35.817153 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 12 13:09:35.817161 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 12 13:09:35.817168 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 12 13:09:35.817185 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 12 13:09:35.817192 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 12 13:09:35.817200 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 12 13:09:35.817208 kernel: iommu: Default domain type: Translated
May 12 13:09:35.817215 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 12 13:09:35.817223 kernel: PCI: Using ACPI for IRQ routing
May 12 13:09:35.817231 kernel: PCI: pci_cache_line_size set to 64 bytes
May 12 13:09:35.817242 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 12 13:09:35.817250 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 12 13:09:35.817407 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 12 13:09:35.817531 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 12 13:09:35.817643 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 12 13:09:35.817654 kernel: vgaarb: loaded
May 12 13:09:35.817662 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 12 13:09:35.817669 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 12 13:09:35.817681 kernel: clocksource: Switched to clocksource kvm-clock
May 12 13:09:35.817722 kernel: VFS: Disk quotas dquot_6.6.0
May 12 13:09:35.817730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 12 13:09:35.817738 kernel: pnp: PnP ACPI init
May 12 13:09:35.817863 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 12 13:09:35.817874 kernel: pnp: PnP ACPI: found 6 devices
May 12 13:09:35.817882 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 12 13:09:35.817890 kernel: NET: Registered PF_INET protocol family
May 12 13:09:35.817901 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 12 13:09:35.817909 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 12 13:09:35.817917 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 12 13:09:35.817925 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 12 13:09:35.817933 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 12 13:09:35.817940 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 12 13:09:35.817948 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 13:09:35.820110 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 13:09:35.820146 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 12 13:09:35.820160 kernel: NET: Registered PF_XDP protocol family
May 12 13:09:35.820324 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
May 12 13:09:35.820433 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
May 12 13:09:35.820538 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 12 13:09:35.820641 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 12 13:09:35.820745 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 12 13:09:35.820849 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 12 13:09:35.820859 kernel: PCI: CLS 0 bytes, default 64
May 12 13:09:35.820872 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 12 13:09:35.820881 kernel: Initialise system trusted keyrings
May 12 13:09:35.820889 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 12 13:09:35.820897 kernel: Key type asymmetric registered
May 12 13:09:35.820905 kernel: Asymmetric key parser 'x509' registered
May 12 13:09:35.820913 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 12 13:09:35.820921 kernel: io scheduler mq-deadline registered
May 12 13:09:35.820929 kernel: io scheduler kyber registered
May 12 13:09:35.820937 kernel: io scheduler bfq registered
May 12 13:09:35.820948 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 12 13:09:35.820957 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 12 13:09:35.820965 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 12 13:09:35.820973 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 12 13:09:35.820981 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 12 13:09:35.820989 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 12 13:09:35.820998 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 12 13:09:35.821006 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 12 13:09:35.821014 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 12 13:09:35.821217 kernel: rtc_cmos 00:04: RTC can wake from S4
May 12 13:09:35.821330 kernel: rtc_cmos 00:04: registered as rtc0
May 12 13:09:35.821437 kernel: rtc_cmos 00:04: setting system clock to 2025-05-12T13:09:35 UTC (1747055375)
May 12 13:09:35.821544 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 12 13:09:35.821555 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 12 13:09:35.821564 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 12 13:09:35.821572 kernel: NET: Registered PF_INET6 protocol family
May 12 13:09:35.821580 kernel: Segment Routing with IPv6
May 12 13:09:35.821592 kernel: In-situ OAM (IOAM) with IPv6
May 12 13:09:35.821601 kernel: NET: Registered PF_PACKET protocol family
May 12 13:09:35.821609 kernel: Key type dns_resolver registered
May 12 13:09:35.821616 kernel: IPI shorthand broadcast: enabled
May 12 13:09:35.821625 kernel: sched_clock: Marking stable (2729002996, 112665833)->(2858327114, -16658285)
May 12 13:09:35.821633 kernel: registered taskstats version 1
May 12 13:09:35.821641 kernel: Loading compiled-in X.509 certificates
May 12 13:09:35.821649 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: b9e2aafc69feb0dbc79bc145148811c017b1afb7'
May 12 13:09:35.821657 kernel: Demotion targets for Node 0: null
May 12 13:09:35.821668 kernel: Key type .fscrypt registered
May 12 13:09:35.821676 kernel: Key type fscrypt-provisioning registered
May 12 13:09:35.821684 kernel: ima: No TPM chip found, activating TPM-bypass!
May 12 13:09:35.821692 kernel: ima: Allocated hash algorithm: sha1
May 12 13:09:35.821700 kernel: ima: No architecture policies found
May 12 13:09:35.821708 kernel: clk: Disabling unused clocks
May 12 13:09:35.821716 kernel: Warning: unable to open an initial console.
May 12 13:09:35.821725 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 12 13:09:35.821733 kernel: Write protecting the kernel read-only data: 24576k
May 12 13:09:35.821744 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 12 13:09:35.821752 kernel: Run /init as init process
May 12 13:09:35.821760 kernel:   with arguments:
May 12 13:09:35.821768 kernel:     /init
May 12 13:09:35.821776 kernel:   with environment:
May 12 13:09:35.821784 kernel:     HOME=/
May 12 13:09:35.821792 kernel:     TERM=linux
May 12 13:09:35.821800 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 12 13:09:35.821810 systemd[1]: Successfully made /usr/ read-only.
May 12 13:09:35.821835 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 13:09:35.821847 systemd[1]: Detected virtualization kvm.
May 12 13:09:35.821855 systemd[1]: Detected architecture x86-64.
May 12 13:09:35.821864 systemd[1]: Running in initrd.
May 12 13:09:35.821872 systemd[1]: No hostname configured, using default hostname.
May 12 13:09:35.821884 systemd[1]: Hostname set to .
May 12 13:09:35.821892 systemd[1]: Initializing machine ID from VM UUID.
May 12 13:09:35.821901 systemd[1]: Queued start job for default target initrd.target.
May 12 13:09:35.821911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:09:35.821922 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:09:35.821933 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 12 13:09:35.821943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 13:09:35.821952 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 12 13:09:35.821963 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 12 13:09:35.821974 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 12 13:09:35.821982 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 12 13:09:35.821991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:09:35.822000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 13:09:35.822009 systemd[1]: Reached target paths.target - Path Units.
May 12 13:09:35.822017 systemd[1]: Reached target slices.target - Slice Units.
May 12 13:09:35.822028 systemd[1]: Reached target swap.target - Swaps.
May 12 13:09:35.822037 systemd[1]: Reached target timers.target - Timer Units.
May 12 13:09:35.822048 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 12 13:09:35.822070 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 13:09:35.822080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 12 13:09:35.822088 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 12 13:09:35.822097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 12 13:09:35.822106 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 12 13:09:35.822118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 12 13:09:35.822126 systemd[1]: Reached target sockets.target - Socket Units. May 12 13:09:35.822135 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 12 13:09:35.822144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 12 13:09:35.822156 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 12 13:09:35.822165 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 12 13:09:35.822184 systemd[1]: Starting systemd-fsck-usr.service... May 12 13:09:35.822193 systemd[1]: Starting systemd-journald.service - Journal Service... May 12 13:09:35.822202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 12 13:09:35.822210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 13:09:35.822219 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 12 13:09:35.822231 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 12 13:09:35.822241 systemd[1]: Finished systemd-fsck-usr.service. May 12 13:09:35.822269 systemd-journald[220]: Collecting audit messages is disabled. May 12 13:09:35.822296 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
May 12 13:09:35.822306 systemd-journald[220]: Journal started May 12 13:09:35.822325 systemd-journald[220]: Runtime Journal (/run/log/journal/e601f1d20e7148d98d9f139b2f8da047) is 6M, max 48.6M, 42.5M free. May 12 13:09:35.811436 systemd-modules-load[221]: Inserted module 'overlay' May 12 13:09:35.853962 systemd[1]: Started systemd-journald.service - Journal Service. May 12 13:09:35.854006 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 12 13:09:35.854022 kernel: Bridge firewalling registered May 12 13:09:35.838145 systemd-modules-load[221]: Inserted module 'br_netfilter' May 12 13:09:35.854288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 12 13:09:35.856291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 13:09:35.857677 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 13:09:35.861472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 12 13:09:35.864450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 13:09:35.874661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 12 13:09:35.875428 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 12 13:09:35.885670 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 12 13:09:35.886416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 13:09:35.887155 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 13:09:35.890834 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 12 13:09:35.893070 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 12 13:09:35.907272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 13:09:35.908162 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 12 13:09:35.934140 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62c60c65dace52c1f99bbb1aca7fdf1720d975c5bba458cbf5745d5f40d8ca41 May 12 13:09:35.941906 systemd-resolved[256]: Positive Trust Anchors: May 12 13:09:35.941921 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 12 13:09:35.941954 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 12 13:09:35.944358 systemd-resolved[256]: Defaulting to hostname 'linux'. May 12 13:09:35.945334 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 12 13:09:35.951270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 12 13:09:36.052091 kernel: SCSI subsystem initialized May 12 13:09:36.061085 kernel: Loading iSCSI transport class v2.0-870. 
May 12 13:09:36.072095 kernel: iscsi: registered transport (tcp) May 12 13:09:36.093092 kernel: iscsi: registered transport (qla4xxx) May 12 13:09:36.093115 kernel: QLogic iSCSI HBA Driver May 12 13:09:36.114674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 12 13:09:36.142303 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 12 13:09:36.143962 systemd[1]: Reached target network-pre.target - Preparation for Network. May 12 13:09:36.203239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 12 13:09:36.206035 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 12 13:09:36.267094 kernel: raid6: avx2x4 gen() 30057 MB/s May 12 13:09:36.284081 kernel: raid6: avx2x2 gen() 30703 MB/s May 12 13:09:36.301198 kernel: raid6: avx2x1 gen() 25454 MB/s May 12 13:09:36.301220 kernel: raid6: using algorithm avx2x2 gen() 30703 MB/s May 12 13:09:36.319174 kernel: raid6: .... xor() 19789 MB/s, rmw enabled May 12 13:09:36.319212 kernel: raid6: using avx2x2 recovery algorithm May 12 13:09:36.339087 kernel: xor: automatically using best checksumming function avx May 12 13:09:36.502099 kernel: Btrfs loaded, zoned=no, fsverity=no May 12 13:09:36.510115 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 12 13:09:36.512878 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 13:09:36.547783 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 12 13:09:36.552930 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 13:09:36.555648 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 12 13:09:36.582889 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation May 12 13:09:36.611344 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 12 13:09:36.613873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 12 13:09:36.685867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 12 13:09:36.689469 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 12 13:09:36.718120 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 12 13:09:36.775317 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 12 13:09:36.775473 kernel: cryptd: max_cpu_qlen set to 1000 May 12 13:09:36.775491 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 12 13:09:36.775502 kernel: GPT:9289727 != 19775487 May 12 13:09:36.775512 kernel: GPT:Alternate GPT header not at the end of the disk. May 12 13:09:36.775522 kernel: GPT:9289727 != 19775487 May 12 13:09:36.775532 kernel: GPT: Use GNU Parted to correct GPT errors. May 12 13:09:36.775542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 12 13:09:36.775552 kernel: AES CTR mode by8 optimization enabled May 12 13:09:36.775562 kernel: libata version 3.00 loaded. 
May 12 13:09:36.775572 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 12 13:09:36.775585 kernel: ahci 0000:00:1f.2: version 3.0 May 12 13:09:36.792906 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 12 13:09:36.792921 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 12 13:09:36.793081 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 12 13:09:36.793226 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 12 13:09:36.793357 kernel: scsi host0: ahci May 12 13:09:36.793501 kernel: scsi host1: ahci May 12 13:09:36.793651 kernel: scsi host2: ahci May 12 13:09:36.793794 kernel: scsi host3: ahci May 12 13:09:36.793930 kernel: scsi host4: ahci May 12 13:09:36.794081 kernel: scsi host5: ahci May 12 13:09:36.794228 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 May 12 13:09:36.794240 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 May 12 13:09:36.794254 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 May 12 13:09:36.794264 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 May 12 13:09:36.794274 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 May 12 13:09:36.794284 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 May 12 13:09:36.773853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 12 13:09:36.773974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 12 13:09:36.775464 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 12 13:09:36.777534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 13:09:36.811749 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
May 12 13:09:36.825634 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 12 13:09:36.854470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 13:09:36.862883 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 12 13:09:36.864165 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 12 13:09:36.873698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 12 13:09:36.875652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 12 13:09:36.899802 disk-uuid[636]: Primary Header is updated. May 12 13:09:36.899802 disk-uuid[636]: Secondary Entries is updated. May 12 13:09:36.899802 disk-uuid[636]: Secondary Header is updated. May 12 13:09:36.904102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 12 13:09:36.908093 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 12 13:09:37.103086 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 12 13:09:37.103147 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 12 13:09:37.104082 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 12 13:09:37.105085 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 12 13:09:37.105102 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 12 13:09:37.106085 kernel: ata3.00: applying bridge limits May 12 13:09:37.107088 kernel: ata3.00: configured for UDMA/100 May 12 13:09:37.107105 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 12 13:09:37.112089 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 12 13:09:37.112150 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 12 13:09:37.154095 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 12 13:09:37.179850 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 12 13:09:37.179871 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 12 13:09:37.637091 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 12 13:09:37.639812 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 12 13:09:37.642279 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 12 13:09:37.644599 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 12 13:09:37.647471 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 12 13:09:37.680399 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 12 13:09:37.909082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 12 13:09:37.910490 disk-uuid[637]: The operation has completed successfully. May 12 13:09:37.941090 systemd[1]: disk-uuid.service: Deactivated successfully. May 12 13:09:37.941231 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 12 13:09:37.974004 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 12 13:09:37.998254 sh[666]: Success May 12 13:09:38.016984 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 12 13:09:38.017082 kernel: device-mapper: uevent: version 1.0.3 May 12 13:09:38.017097 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 12 13:09:38.026092 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 12 13:09:38.056732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 12 13:09:38.060225 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 12 13:09:38.077286 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 12 13:09:38.085669 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 12 13:09:38.085723 kernel: BTRFS: device fsid d9149e4e-f32a-4a44-9d1b-bc5d4da8af15 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (678) May 12 13:09:38.086083 kernel: BTRFS info (device dm-0): first mount of filesystem d9149e4e-f32a-4a44-9d1b-bc5d4da8af15 May 12 13:09:38.088600 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 12 13:09:38.088618 kernel: BTRFS info (device dm-0): using free-space-tree May 12 13:09:38.092853 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 12 13:09:38.094233 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 12 13:09:38.095730 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 12 13:09:38.096488 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 12 13:09:38.098304 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 12 13:09:38.135772 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (711) May 12 13:09:38.135818 kernel: BTRFS info (device vda6): first mount of filesystem af2cb67a-ac4a-45f1-b390-c5a730521ef5 May 12 13:09:38.135830 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 12 13:09:38.137297 kernel: BTRFS info (device vda6): using free-space-tree May 12 13:09:38.144086 kernel: BTRFS info (device vda6): last unmount of filesystem af2cb67a-ac4a-45f1-b390-c5a730521ef5 May 12 13:09:38.144584 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 12 13:09:38.145908 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 12 13:09:38.222252 ignition[750]: Ignition 2.21.0 May 12 13:09:38.222609 ignition[750]: Stage: fetch-offline May 12 13:09:38.222642 ignition[750]: no configs at "/usr/lib/ignition/base.d" May 12 13:09:38.222651 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 12 13:09:38.222731 ignition[750]: parsed url from cmdline: "" May 12 13:09:38.222735 ignition[750]: no config URL provided May 12 13:09:38.222740 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" May 12 13:09:38.222748 ignition[750]: no config at "/usr/lib/ignition/user.ign" May 12 13:09:38.222767 ignition[750]: op(1): [started] loading QEMU firmware config module May 12 13:09:38.222772 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" May 12 13:09:38.231892 ignition[750]: op(1): [finished] loading QEMU firmware config module May 12 13:09:38.240877 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 12 13:09:38.243490 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 12 13:09:38.275532 ignition[750]: parsing config with SHA512: 77cb6c6ef60cab05145f38a85cd1ce4e38939546f323d341349e380924807b4d6458b938fda20b548800ded7f2341c1dd8c7e50101761d1074e66b24f41d8fea May 12 13:09:38.281217 unknown[750]: fetched base config from "system" May 12 13:09:38.281230 unknown[750]: fetched user config from "qemu" May 12 13:09:38.283095 ignition[750]: fetch-offline: fetch-offline passed May 12 13:09:38.283920 ignition[750]: Ignition finished successfully May 12 13:09:38.284982 systemd-networkd[856]: lo: Link UP May 12 13:09:38.284993 systemd-networkd[856]: lo: Gained carrier May 12 13:09:38.286490 systemd-networkd[856]: Enumeration completed May 12 13:09:38.286906 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 12 13:09:38.286911 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 12 13:09:38.288677 systemd-networkd[856]: eth0: Link UP May 12 13:09:38.288681 systemd-networkd[856]: eth0: Gained carrier May 12 13:09:38.288689 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 12 13:09:38.290441 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 13:09:38.291803 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 12 13:09:38.295087 systemd[1]: Reached target network.target - Network. May 12 13:09:38.297084 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 12 13:09:38.299987 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 12 13:09:38.317113 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 12 13:09:38.331673 ignition[860]: Ignition 2.21.0 May 12 13:09:38.331690 ignition[860]: Stage: kargs May 12 13:09:38.331836 ignition[860]: no configs at "/usr/lib/ignition/base.d" May 12 13:09:38.331847 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 12 13:09:38.334116 ignition[860]: kargs: kargs passed May 12 13:09:38.334192 ignition[860]: Ignition finished successfully May 12 13:09:38.339372 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 12 13:09:38.340429 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 12 13:09:38.369684 ignition[869]: Ignition 2.21.0 May 12 13:09:38.369698 ignition[869]: Stage: disks May 12 13:09:38.369819 ignition[869]: no configs at "/usr/lib/ignition/base.d" May 12 13:09:38.369828 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 12 13:09:38.371427 ignition[869]: disks: disks passed May 12 13:09:38.371475 ignition[869]: Ignition finished successfully May 12 13:09:38.373943 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 12 13:09:38.374899 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 12 13:09:38.376559 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 12 13:09:38.376891 systemd[1]: Reached target local-fs.target - Local File Systems. May 12 13:09:38.377405 systemd[1]: Reached target sysinit.target - System Initialization. May 12 13:09:38.377739 systemd[1]: Reached target basic.target - Basic System. May 12 13:09:38.378926 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 12 13:09:38.411045 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 12 13:09:38.419648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 12 13:09:38.422150 systemd[1]: Mounting sysroot.mount - /sysroot... May 12 13:09:38.528089 kernel: EXT4-fs (vda9): mounted filesystem a3ce2c9c-b487-4c15-a3fc-a7229582ef9b r/w with ordered data mode. Quota mode: none. May 12 13:09:38.528274 systemd[1]: Mounted sysroot.mount - /sysroot. May 12 13:09:38.529661 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 12 13:09:38.532146 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 12 13:09:38.533984 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 12 13:09:38.535050 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 12 13:09:38.535110 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 12 13:09:38.535132 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 12 13:09:38.546145 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 12 13:09:38.551237 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (887) May 12 13:09:38.548923 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 12 13:09:38.555016 kernel: BTRFS info (device vda6): first mount of filesystem af2cb67a-ac4a-45f1-b390-c5a730521ef5 May 12 13:09:38.555038 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 12 13:09:38.555086 kernel: BTRFS info (device vda6): using free-space-tree May 12 13:09:38.559095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 12 13:09:38.589510 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory May 12 13:09:38.595119 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory May 12 13:09:38.600190 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory May 12 13:09:38.605201 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory May 12 13:09:38.689237 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 12 13:09:38.691373 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 12 13:09:38.693130 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 12 13:09:38.715100 kernel: BTRFS info (device vda6): last unmount of filesystem af2cb67a-ac4a-45f1-b390-c5a730521ef5 May 12 13:09:38.727229 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 12 13:09:38.742474 ignition[1002]: INFO : Ignition 2.21.0 May 12 13:09:38.742474 ignition[1002]: INFO : Stage: mount May 12 13:09:38.744227 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" May 12 13:09:38.744227 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 12 13:09:38.747206 ignition[1002]: INFO : mount: mount passed May 12 13:09:38.747206 ignition[1002]: INFO : Ignition finished successfully May 12 13:09:38.749965 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 12 13:09:38.752161 systemd[1]: Starting ignition-files.service - Ignition (files)... May 12 13:09:39.084742 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 12 13:09:39.086219 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 12 13:09:39.116373 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1013) May 12 13:09:39.116401 kernel: BTRFS info (device vda6): first mount of filesystem af2cb67a-ac4a-45f1-b390-c5a730521ef5 May 12 13:09:39.116413 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 12 13:09:39.118080 kernel: BTRFS info (device vda6): using free-space-tree May 12 13:09:39.120720 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 12 13:09:39.147673 ignition[1030]: INFO : Ignition 2.21.0 May 12 13:09:39.147673 ignition[1030]: INFO : Stage: files May 12 13:09:39.149413 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" May 12 13:09:39.149413 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 12 13:09:39.152425 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping May 12 13:09:39.153735 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 12 13:09:39.153735 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 12 13:09:39.157977 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 12 13:09:39.159458 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 12 13:09:39.161145 unknown[1030]: wrote ssh authorized keys file for user: core May 12 13:09:39.162284 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 12 13:09:39.163647 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 12 13:09:39.163647 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 12 13:09:39.231735 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 12 13:09:39.640703 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 12 13:09:39.642727 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 12 13:09:39.656894 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 12 13:09:39.656894 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 12 13:09:39.656894 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 13:09:39.656894 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 13:09:39.656894 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 13:09:39.656894 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 12 13:09:40.105008 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 12 13:09:40.264227 systemd-networkd[856]: eth0: Gained IPv6LL May 12 13:09:40.469936 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 13:09:40.469936 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 12 13:09:40.474301 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 12 13:09:40.476341 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 12 13:09:40.476341 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 12 13:09:40.476341 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 12 13:09:40.476341 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 12 13:09:40.476341 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 12 13:09:40.476341 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 12 13:09:40.476341 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 12 13:09:40.503662 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 12 13:09:40.507466 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 12 13:09:40.509168 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 12 13:09:40.509168 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 12 13:09:40.509168 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 12 13:09:40.509168 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 12 13:09:40.509168 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 12 13:09:40.509168 ignition[1030]: INFO : files: files passed May 12 13:09:40.509168 ignition[1030]: INFO : Ignition finished successfully May 12 13:09:40.514466 systemd[1]: Finished ignition-files.service - Ignition (files). May 12 13:09:40.516326 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 12 13:09:40.521028 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 12 13:09:40.536343 systemd[1]: ignition-quench.service: Deactivated successfully. May 12 13:09:40.536456 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 12 13:09:40.539470 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory May 12 13:09:40.542982 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 12 13:09:40.544689 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 12 13:09:40.547444 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 12 13:09:40.545224 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 13:09:40.547949 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 12 13:09:40.550927 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 12 13:09:40.607074 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 12 13:09:40.607196 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 12 13:09:40.609571 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 12 13:09:40.610655 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 12 13:09:40.611014 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 12 13:09:40.614431 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 12 13:09:40.638480 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 13:09:40.639780 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 12 13:09:40.663958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 12 13:09:40.664129 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:09:40.667398 systemd[1]: Stopped target timers.target - Timer Units.
May 12 13:09:40.668548 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 12 13:09:40.668653 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 13:09:40.673295 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 12 13:09:40.673426 systemd[1]: Stopped target basic.target - Basic System.
May 12 13:09:40.673760 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 12 13:09:40.674118 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 13:09:40.674606 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 12 13:09:40.674937 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 12 13:09:40.675448 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 12 13:09:40.675780 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 13:09:40.676153 systemd[1]: Stopped target sysinit.target - System Initialization.
May 12 13:09:40.676623 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 12 13:09:40.676950 systemd[1]: Stopped target swap.target - Swaps.
May 12 13:09:40.677428 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 12 13:09:40.677534 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 12 13:09:40.694373 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 12 13:09:40.694745 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:09:40.695045 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 12 13:09:40.701414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:09:40.703880 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 12 13:09:40.703987 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 12 13:09:40.706825 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 12 13:09:40.706940 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 13:09:40.708032 systemd[1]: Stopped target paths.target - Path Units.
May 12 13:09:40.708452 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 12 13:09:40.715186 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:09:40.717989 systemd[1]: Stopped target slices.target - Slice Units.
May 12 13:09:40.718168 systemd[1]: Stopped target sockets.target - Socket Units.
May 12 13:09:40.719869 systemd[1]: iscsid.socket: Deactivated successfully.
May 12 13:09:40.719969 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 12 13:09:40.722542 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 12 13:09:40.722625 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 13:09:40.723443 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 12 13:09:40.723572 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 13:09:40.725270 systemd[1]: ignition-files.service: Deactivated successfully.
May 12 13:09:40.725374 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 12 13:09:40.730218 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 12 13:09:40.731375 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 12 13:09:40.731486 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:09:40.732633 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 12 13:09:40.735953 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 12 13:09:40.736093 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:09:40.737033 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 12 13:09:40.737152 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 13:09:40.747176 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 12 13:09:40.747290 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 12 13:09:40.759074 ignition[1086]: INFO : Ignition 2.21.0
May 12 13:09:40.759074 ignition[1086]: INFO : Stage: umount
May 12 13:09:40.759074 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 13:09:40.759074 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:09:40.763109 ignition[1086]: INFO : umount: umount passed
May 12 13:09:40.763109 ignition[1086]: INFO : Ignition finished successfully
May 12 13:09:40.763226 systemd[1]: ignition-mount.service: Deactivated successfully.
May 12 13:09:40.763362 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 12 13:09:40.764244 systemd[1]: Stopped target network.target - Network.
May 12 13:09:40.766614 systemd[1]: ignition-disks.service: Deactivated successfully.
May 12 13:09:40.766667 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 12 13:09:40.767834 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 12 13:09:40.767901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 12 13:09:40.770202 systemd[1]: ignition-setup.service: Deactivated successfully.
May 12 13:09:40.770257 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 12 13:09:40.772750 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 12 13:09:40.772795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 12 13:09:40.773327 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 12 13:09:40.773720 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 12 13:09:40.775839 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 12 13:09:40.781142 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 12 13:09:40.781268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 12 13:09:40.785457 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 12 13:09:40.785737 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 12 13:09:40.785781 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 13:09:40.789467 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 12 13:09:40.796990 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 12 13:09:40.797144 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 12 13:09:40.800924 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 12 13:09:40.801180 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 12 13:09:40.802263 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 12 13:09:40.802301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 12 13:09:40.807556 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 12 13:09:40.809484 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 12 13:09:40.809539 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 13:09:40.809773 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 12 13:09:40.809813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 12 13:09:40.814664 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 12 13:09:40.814708 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 12 13:09:40.815867 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 13:09:40.817129 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 12 13:09:40.840636 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 12 13:09:40.840814 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 13:09:40.844117 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 12 13:09:40.844197 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 12 13:09:40.845257 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 12 13:09:40.845292 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 13:09:40.845561 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 12 13:09:40.845603 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 12 13:09:40.846388 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 12 13:09:40.846433 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 12 13:09:40.847098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 12 13:09:40.847145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 13:09:40.848635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 12 13:09:40.858267 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 12 13:09:40.858320 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 12 13:09:40.862721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 12 13:09:40.862764 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 13:09:40.866308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 13:09:40.866358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:09:40.870186 systemd[1]: network-cleanup.service: Deactivated successfully.
May 12 13:09:40.870291 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 12 13:09:40.877223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 12 13:09:40.877334 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 12 13:09:40.925495 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 12 13:09:40.925633 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 12 13:09:40.926816 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 12 13:09:40.928256 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 12 13:09:40.928311 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 12 13:09:40.931991 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 12 13:09:40.949660 systemd[1]: Switching root.
May 12 13:09:40.986345 systemd-journald[220]: Journal stopped
May 12 13:09:42.480448 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 12 13:09:42.480519 kernel: SELinux: policy capability network_peer_controls=1
May 12 13:09:42.480541 kernel: SELinux: policy capability open_perms=1
May 12 13:09:42.480555 kernel: SELinux: policy capability extended_socket_class=1
May 12 13:09:42.480582 kernel: SELinux: policy capability always_check_network=0
May 12 13:09:42.480596 kernel: SELinux: policy capability cgroup_seclabel=1
May 12 13:09:42.480612 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 12 13:09:42.480630 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 12 13:09:42.480644 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 12 13:09:42.480664 kernel: SELinux: policy capability userspace_initial_context=0
May 12 13:09:42.480679 kernel: audit: type=1403 audit(1747055381.457:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 12 13:09:42.480694 systemd[1]: Successfully loaded SELinux policy in 47.328ms.
May 12 13:09:42.480719 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.440ms.
May 12 13:09:42.480736 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 13:09:42.480752 systemd[1]: Detected virtualization kvm.
May 12 13:09:42.480770 systemd[1]: Detected architecture x86-64.
May 12 13:09:42.480785 systemd[1]: Detected first boot.
May 12 13:09:42.480802 systemd[1]: Initializing machine ID from VM UUID.
May 12 13:09:42.480817 zram_generator::config[1131]: No configuration found.
May 12 13:09:42.480834 kernel: Guest personality initialized and is inactive
May 12 13:09:42.480849 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 12 13:09:42.480863 kernel: Initialized host personality
May 12 13:09:42.480878 kernel: NET: Registered PF_VSOCK protocol family
May 12 13:09:42.480893 systemd[1]: Populated /etc with preset unit settings.
May 12 13:09:42.480912 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 12 13:09:42.480927 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 12 13:09:42.480943 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 12 13:09:42.480959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 12 13:09:42.480975 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 12 13:09:42.480997 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 12 13:09:42.481013 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 12 13:09:42.481028 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 12 13:09:42.481044 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 12 13:09:42.481280 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 12 13:09:42.481300 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 12 13:09:42.482255 systemd[1]: Created slice user.slice - User and Session Slice.
May 12 13:09:42.482283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:09:42.482300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:09:42.482323 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 12 13:09:42.482339 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 12 13:09:42.482356 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 12 13:09:42.482377 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 13:09:42.482393 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 12 13:09:42.482409 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:09:42.482425 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 13:09:42.482440 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 12 13:09:42.482456 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 12 13:09:42.482678 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 12 13:09:42.482695 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 12 13:09:42.482713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:09:42.482730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 13:09:42.482746 systemd[1]: Reached target slices.target - Slice Units.
May 12 13:09:42.482761 systemd[1]: Reached target swap.target - Swaps.
May 12 13:09:42.482776 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 12 13:09:42.482792 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 12 13:09:42.482808 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 12 13:09:42.482824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 13:09:42.482839 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 13:09:42.483088 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 13:09:42.483106 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 12 13:09:42.483123 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 12 13:09:42.483139 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 12 13:09:42.483154 systemd[1]: Mounting media.mount - External Media Directory...
May 12 13:09:42.483171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 12 13:09:42.483187 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 12 13:09:42.483202 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 12 13:09:42.483217 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 12 13:09:42.483236 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 12 13:09:42.485899 systemd[1]: Reached target machines.target - Containers.
May 12 13:09:42.485925 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 12 13:09:42.485943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:09:42.485960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 13:09:42.485976 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 12 13:09:42.486004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:09:42.486021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 13:09:42.486041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:09:42.486088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 12 13:09:42.486120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:09:42.486139 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 12 13:09:42.486155 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 12 13:09:42.486171 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 12 13:09:42.486187 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 12 13:09:42.486204 systemd[1]: Stopped systemd-fsck-usr.service.
May 12 13:09:42.486221 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:09:42.486241 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 13:09:42.486257 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 13:09:42.486273 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 13:09:42.486289 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 12 13:09:42.486306 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 12 13:09:42.486322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 13:09:42.486342 systemd[1]: verity-setup.service: Deactivated successfully.
May 12 13:09:42.486358 systemd[1]: Stopped verity-setup.service.
May 12 13:09:42.486375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 12 13:09:42.486391 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 12 13:09:42.486407 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 12 13:09:42.486424 systemd[1]: Mounted media.mount - External Media Directory.
May 12 13:09:42.486440 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 12 13:09:42.486458 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 12 13:09:42.486503 systemd-journald[1209]: Collecting audit messages is disabled.
May 12 13:09:42.486543 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 12 13:09:42.486560 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 12 13:09:42.486582 systemd-journald[1209]: Journal started
May 12 13:09:42.486612 systemd-journald[1209]: Runtime Journal (/run/log/journal/e601f1d20e7148d98d9f139b2f8da047) is 6M, max 48.6M, 42.5M free.
May 12 13:09:41.970790 systemd[1]: Queued start job for default target multi-user.target.
May 12 13:09:41.992893 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 12 13:09:41.993381 systemd[1]: systemd-journald.service: Deactivated successfully.
May 12 13:09:42.496147 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 13:09:42.500988 kernel: fuse: init (API version 7.41)
May 12 13:09:42.501170 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:09:42.503187 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 12 13:09:42.510206 kernel: loop: module loaded
May 12 13:09:42.510144 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 12 13:09:42.514716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:09:42.514962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:09:42.517491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:09:42.517719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:09:42.519770 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 12 13:09:42.520101 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 12 13:09:42.521764 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:09:42.522021 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:09:42.523690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 13:09:42.525367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 12 13:09:42.527581 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 12 13:09:42.531815 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 12 13:09:42.566215 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 12 13:09:42.584151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 12 13:09:42.591088 kernel: ACPI: bus type drm_connector registered
May 12 13:09:42.598049 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 12 13:09:42.611204 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 12 13:09:42.611266 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 13:09:42.614010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 12 13:09:42.617320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 12 13:09:42.619370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:09:42.622283 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 12 13:09:42.635206 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 12 13:09:42.642219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 13:09:42.644829 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 12 13:09:42.645108 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 13:09:42.648189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 13:09:42.649665 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 12 13:09:42.669188 systemd-journald[1209]: Time spent on flushing to /var/log/journal/e601f1d20e7148d98d9f139b2f8da047 is 30.648ms for 971 entries.
May 12 13:09:42.669188 systemd-journald[1209]: System Journal (/var/log/journal/e601f1d20e7148d98d9f139b2f8da047) is 8M, max 195.6M, 187.6M free.
May 12 13:09:42.735005 systemd-journald[1209]: Received client request to flush runtime journal.
May 12 13:09:42.735088 kernel: loop0: detected capacity change from 0 to 113872
May 12 13:09:42.658716 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 12 13:09:42.670495 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 13:09:42.670751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 13:09:42.677420 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:09:42.689174 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 12 13:09:42.694498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 12 13:09:42.701143 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 12 13:09:42.718825 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 13:09:42.729956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 12 13:09:42.737079 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 12 13:09:42.744966 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 12 13:09:42.775881 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 12 13:09:42.782299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 13:09:42.813120 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 12 13:09:42.824894 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 12 13:09:42.856968 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 12 13:09:42.857390 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 12 13:09:42.864121 kernel: loop1: detected capacity change from 0 to 146240
May 12 13:09:42.870666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 13:09:42.922104 kernel: loop2: detected capacity change from 0 to 210664
May 12 13:09:42.984656 kernel: loop3: detected capacity change from 0 to 113872
May 12 13:09:42.995385 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 12 13:09:43.014097 kernel: loop4: detected capacity change from 0 to 146240
May 12 13:09:43.041097 kernel: loop5: detected capacity change from 0 to 210664
May 12 13:09:43.062699 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 12 13:09:43.064117 (sd-merge)[1273]: Merged extensions into '/usr'.
May 12 13:09:43.069280 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
May 12 13:09:43.069398 systemd[1]: Reloading...
May 12 13:09:43.142093 zram_generator::config[1299]: No configuration found.
May 12 13:09:43.285433 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 13:09:43.344725 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 12 13:09:43.400415 systemd[1]: Reloading finished in 330 ms.
May 12 13:09:43.441508 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 12 13:09:43.444344 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 12 13:09:43.470539 systemd[1]: Starting ensure-sysext.service...
May 12 13:09:43.473465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 13:09:43.486939 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
May 12 13:09:43.486973 systemd[1]: Reloading...
May 12 13:09:43.518940 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 12 13:09:43.518992 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 12 13:09:43.519369 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 12 13:09:43.519673 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 12 13:09:43.520774 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 12 13:09:43.521142 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 12 13:09:43.521225 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 12 13:09:43.538664 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 12 13:09:43.538681 systemd-tmpfiles[1337]: Skipping /boot
May 12 13:09:43.556102 zram_generator::config[1364]: No configuration found.
May 12 13:09:43.560695 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 12 13:09:43.560875 systemd-tmpfiles[1337]: Skipping /boot
May 12 13:09:43.702724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 13:09:43.831556 systemd[1]: Reloading finished in 344 ms.
May 12 13:09:43.879592 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 12 13:09:43.915729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 13:09:43.925912 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 13:09:43.931153 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 12 13:09:43.939587 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 12 13:09:43.944155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 13:09:43.947073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 13:09:43.950458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 12 13:09:43.955616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 12 13:09:43.955783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:09:43.956854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:09:43.965232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:09:43.968637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:09:43.971242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:09:43.971381 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:09:43.975433 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 12 13:09:43.976682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 12 13:09:43.978577 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:09:43.979430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:09:43.982083 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 12 13:09:43.984344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:09:43.984600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:09:43.986773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:09:43.987044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:09:44.001374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 13:09:44.001749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 13:09:44.005813 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 12 13:09:44.006512 augenrules[1436]: No rules
May 12 13:09:44.008292 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 13:09:44.009086 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 13:09:44.011105 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 12 13:09:44.013641 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 12 13:09:44.025858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 12 13:09:44.030336 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 13:09:44.031651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:09:44.035275 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
May 12 13:09:44.041880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:09:44.044367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 13:09:44.052875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:09:44.059454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:09:44.060875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:09:44.061002 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:09:44.061173 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 12 13:09:44.061259 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 12 13:09:44.062770 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 12 13:09:44.063527 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 12 13:09:44.068823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:09:44.070230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:09:44.071124 augenrules[1445]: /sbin/augenrules: No change
May 12 13:09:44.071745 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 13:09:44.073608 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 13:09:44.073817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 13:09:44.075610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:09:44.075811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:09:44.077769 augenrules[1475]: No rules
May 12 13:09:44.077565 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:09:44.077762 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:09:44.079615 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 13:09:44.079861 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 13:09:44.086151 systemd[1]: Finished ensure-sysext.service.
May 12 13:09:44.099895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 13:09:44.101124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 13:09:44.101165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 13:09:44.104160 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 12 13:09:44.171397 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 12 13:09:44.210284 systemd-resolved[1406]: Positive Trust Anchors:
May 12 13:09:44.210623 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 13:09:44.210740 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 13:09:44.214630 systemd-resolved[1406]: Defaulting to hostname 'linux'.
May 12 13:09:44.218353 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
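The negative trust anchor list printed by systemd-resolved above is dominated by the reverse (in-addr.arpa) zones of the RFC 1918 private ranges, for which DNSSEC validation can never succeed. A small illustrative check, using only values visible in that log line (the helper is my own sketch, not part of systemd-resolved):

```python
import ipaddress

# RFC 1918 ranges whose reverse zones appear in the negative trust anchor
# list above: 10.in-addr.arpa, 16-31.172.in-addr.arpa, 168.192.in-addr.arpa.
PRIVATE_V4 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def has_negative_anchor(addr: str) -> bool:
    """True if addr's reverse zone falls under one of the private-range anchors."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_V4)
```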
May 12 13:09:44.219894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 13:09:44.234903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 13:09:44.237992 systemd-networkd[1501]: lo: Link UP
May 12 13:09:44.238474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 12 13:09:44.241114 systemd-networkd[1501]: lo: Gained carrier
May 12 13:09:44.242860 systemd-networkd[1501]: Enumeration completed
May 12 13:09:44.242967 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 13:09:44.243274 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:09:44.243278 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 13:09:44.244097 systemd-networkd[1501]: eth0: Link UP
May 12 13:09:44.244322 systemd-networkd[1501]: eth0: Gained carrier
May 12 13:09:44.244383 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:09:44.247013 systemd[1]: Reached target network.target - Network.
May 12 13:09:44.253074 kernel: mousedev: PS/2 mouse device common for all mice
May 12 13:09:44.255800 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 12 13:09:44.257083 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 12 13:09:44.258177 systemd-networkd[1501]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 13:09:44.261287 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 12 13:09:44.266095 kernel: ACPI: button: Power Button [PWRF]
May 12 13:09:44.276222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 12 13:09:43.860155 systemd-resolved[1406]: Clock change detected. Flushing caches.
May 12 13:09:43.869102 systemd-journald[1209]: Time jumped backwards, rotating.
May 12 13:09:43.869168 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 12 13:09:43.869409 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 12 13:09:43.860178 systemd-timesyncd[1506]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 12 13:09:43.860224 systemd-timesyncd[1506]: Initial clock synchronization to Mon 2025-05-12 13:09:43.860103 UTC.
May 12 13:09:43.861718 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 12 13:09:43.863486 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 13:09:43.869414 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 12 13:09:43.870752 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 12 13:09:43.872187 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 12 13:09:43.873638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 12 13:09:43.876183 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 12 13:09:43.876218 systemd[1]: Reached target paths.target - Path Units.
May 12 13:09:43.877403 systemd[1]: Reached target time-set.target - System Time Set.
May 12 13:09:43.878777 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 12 13:09:43.880232 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
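The DHCPv4 lease logged above hands eth0 the address 10.0.0.126/16 with gateway 10.0.0.1. A quick sanity check of those two values with Python's `ipaddress` module (an illustrative check, not something systemd-networkd itself runs):

```python
import ipaddress

# Values taken from the DHCPv4 lease logged above.
iface = ipaddress.ip_interface("10.0.0.126/16")
gateway = ipaddress.ip_address("10.0.0.1")

# The gateway must lie inside the leased prefix to be reachable on-link.
assert gateway in iface.network
# A /16 spans 2**16 addresses.
assert iface.network.num_addresses == 2 ** 16
```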
May 12 13:09:43.881763 systemd[1]: Reached target timers.target - Timer Units.
May 12 13:09:43.883788 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 12 13:09:43.886710 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 12 13:09:43.892152 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 12 13:09:43.895992 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 12 13:09:43.897303 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 12 13:09:43.902084 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 12 13:09:43.903589 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 12 13:09:43.905725 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 12 13:09:43.907188 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 12 13:09:43.910124 systemd[1]: Reached target sockets.target - Socket Units.
May 12 13:09:43.912387 systemd[1]: Reached target basic.target - Basic System.
May 12 13:09:43.913482 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 12 13:09:43.913505 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 12 13:09:43.917421 systemd[1]: Starting containerd.service - containerd container runtime...
May 12 13:09:43.920486 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 12 13:09:43.923446 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 12 13:09:43.928351 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 12 13:09:43.966213 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 12 13:09:43.968328 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 12 13:09:43.969546 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 12 13:09:43.972409 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 12 13:09:43.973146 jq[1539]: false
May 12 13:09:43.975670 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 12 13:09:43.979705 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 12 13:09:43.989452 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 12 13:09:43.994913 systemd[1]: Starting systemd-logind.service - User Login Management...
May 12 13:09:43.996853 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 12 13:09:43.997347 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 12 13:09:43.997919 systemd[1]: Starting update-engine.service - Update Engine...
May 12 13:09:44.000400 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 12 13:09:44.008296 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache
May 12 13:09:44.006880 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 12 13:09:44.005836 oslogin_cache_refresh[1552]: Refreshing passwd entry cache
May 12 13:09:44.008651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 12 13:09:44.008945 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 12 13:09:44.013706 jq[1561]: true
May 12 13:09:44.040046 update_engine[1560]: I20250512 13:09:44.039968 1560 main.cc:92] Flatcar Update Engine starting
May 12 13:09:44.040556 extend-filesystems[1551]: Found loop3
May 12 13:09:44.041624 extend-filesystems[1551]: Found loop4
May 12 13:09:44.041624 extend-filesystems[1551]: Found loop5
May 12 13:09:44.041624 extend-filesystems[1551]: Found sr0
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda1
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda2
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda3
May 12 13:09:44.041624 extend-filesystems[1551]: Found usr
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda4
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda6
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda7
May 12 13:09:44.041624 extend-filesystems[1551]: Found vda9
May 12 13:09:44.041624 extend-filesystems[1551]: Checking size of /dev/vda9
May 12 13:09:44.042587 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 12 13:09:44.057727 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting
May 12 13:09:44.057727 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 12 13:09:44.057727 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache
May 12 13:09:44.057727 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting
May 12 13:09:44.057727 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 12 13:09:44.043590 oslogin_cache_refresh[1552]: Failure getting users, quitting
May 12 13:09:44.042873 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 12 13:09:44.057941 jq[1567]: true
May 12 13:09:44.043607 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 12 13:09:44.051728 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 12 13:09:44.043662 oslogin_cache_refresh[1552]: Refreshing group entry cache
May 12 13:09:44.055416 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 12 13:09:44.052295 oslogin_cache_refresh[1552]: Failure getting groups, quitting
May 12 13:09:44.055667 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 12 13:09:44.052305 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 12 13:09:44.061563 systemd[1]: motdgen.service: Deactivated successfully.
May 12 13:09:44.062083 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 12 13:09:44.080468 extend-filesystems[1551]: Resized partition /dev/vda9
May 12 13:09:44.080524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:09:44.087230 tar[1565]: linux-amd64/helm
May 12 13:09:44.088228 extend-filesystems[1597]: resize2fs 1.47.2 (1-Jan-2025)
May 12 13:09:44.103279 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 12 13:09:44.108088 dbus-daemon[1535]: [system] SELinux support is enabled
May 12 13:09:44.108342 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 12 13:09:44.112021 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 12 13:09:44.112596 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 12 13:09:44.114397 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 12 13:09:44.114415 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 12 13:09:44.124928 kernel: kvm_amd: TSC scaling supported
May 12 13:09:44.124983 kernel: kvm_amd: Nested Virtualization enabled
May 12 13:09:44.125019 kernel: kvm_amd: Nested Paging enabled
May 12 13:09:44.125579 kernel: kvm_amd: LBR virtualization supported
May 12 13:09:44.126323 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 12 13:09:44.127642 kernel: kvm_amd: Virtual GIF supported
May 12 13:09:44.143842 systemd[1]: Started update-engine.service - Update Engine.
May 12 13:09:44.144769 update_engine[1560]: I20250512 13:09:44.144587 1560 update_check_scheduler.cc:74] Next update check in 7m27s
May 12 13:09:44.151492 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 12 13:09:44.161277 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 12 13:09:44.194409 extend-filesystems[1597]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 12 13:09:44.194409 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 1
May 12 13:09:44.194409 extend-filesystems[1597]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 12 13:09:44.192827 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 12 13:09:44.194988 extend-filesystems[1551]: Resized filesystem in /dev/vda9
May 12 13:09:44.193079 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
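The online resize logged above grew the root filesystem on /dev/vda9 from 553472 to 1864699 blocks at the 4 KiB block size resize2fs reports. Working out what that means in bytes (the arithmetic uses only figures from the log):

```python
# Figures from the EXT4 resize logged above: /dev/vda9 grew from
# 553472 to 1864699 blocks, each block 4 KiB ("1864699 (4k) blocks long").
BLOCK_SIZE = 4096

old_bytes = 553472 * BLOCK_SIZE    # size before the resize
new_bytes = 1864699 * BLOCK_SIZE   # size after the resize

old_gib = old_bytes / 2**30        # ≈ 2.11 GiB
new_gib = new_bytes / 2**30        # ≈ 7.11 GiB
```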
May 12 13:09:44.200867 bash[1607]: Updated "/home/core/.ssh/authorized_keys"
May 12 13:09:44.278831 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 12 13:09:44.284284 kernel: EDAC MC: Ver: 3.0.0
May 12 13:09:44.295180 containerd[1572]: time="2025-05-12T13:09:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 12 13:09:44.295789 containerd[1572]: time="2025-05-12T13:09:44.295734347Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 12 13:09:44.299786 systemd-logind[1558]: Watching system buttons on /dev/input/event2 (Power Button)
May 12 13:09:44.299812 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 12 13:09:44.310474 containerd[1572]: time="2025-05-12T13:09:44.310422057Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.173µs"
May 12 13:09:44.310474 containerd[1572]: time="2025-05-12T13:09:44.310470207Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 12 13:09:44.310540 containerd[1572]: time="2025-05-12T13:09:44.310493290Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 12 13:09:44.310736 containerd[1572]: time="2025-05-12T13:09:44.310704487Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 12 13:09:44.310761 containerd[1572]: time="2025-05-12T13:09:44.310740324Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 12 13:09:44.310799 containerd[1572]: time="2025-05-12T13:09:44.310772845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 12 13:09:44.310899 containerd[1572]: time="2025-05-12T13:09:44.310867963Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 12 13:09:44.310899 containerd[1572]: time="2025-05-12T13:09:44.310894523Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 12 13:09:44.311270 containerd[1572]: time="2025-05-12T13:09:44.311224672Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 12 13:09:44.311350 containerd[1572]: time="2025-05-12T13:09:44.311321123Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 12 13:09:44.311484 containerd[1572]: time="2025-05-12T13:09:44.311464642Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 12 13:09:44.311567 containerd[1572]: time="2025-05-12T13:09:44.311550083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 12 13:09:44.311748 containerd[1572]: time="2025-05-12T13:09:44.311726724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 12 13:09:44.312091 containerd[1572]: time="2025-05-12T13:09:44.312069277Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 12 13:09:44.312193 containerd[1572]: time="2025-05-12T13:09:44.312173422Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 12 13:09:44.312281 containerd[1572]: time="2025-05-12T13:09:44.312261267Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 12 13:09:44.312373 containerd[1572]: time="2025-05-12T13:09:44.312357006Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 12 13:09:44.312815 containerd[1572]: time="2025-05-12T13:09:44.312795028Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 12 13:09:44.312962 containerd[1572]: time="2025-05-12T13:09:44.312944618Z" level=info msg="metadata content store policy set" policy=shared
May 12 13:09:44.318506 containerd[1572]: time="2025-05-12T13:09:44.318487155Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 12 13:09:44.318585 containerd[1572]: time="2025-05-12T13:09:44.318572365Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 12 13:09:44.318633 containerd[1572]: time="2025-05-12T13:09:44.318621817Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 12 13:09:44.318693 containerd[1572]: time="2025-05-12T13:09:44.318680367Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 12 13:09:44.318739 containerd[1572]: time="2025-05-12T13:09:44.318729179Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 12 13:09:44.318781 containerd[1572]: time="2025-05-12T13:09:44.318771418Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 12 13:09:44.318825 containerd[1572]: time="2025-05-12T13:09:44.318814388Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 12 13:09:44.318867 containerd[1572]: time="2025-05-12T13:09:44.318857249Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 12 13:09:44.318920 containerd[1572]: time="2025-05-12T13:09:44.318909497Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 12 13:09:44.319031 containerd[1572]: time="2025-05-12T13:09:44.319018071Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 12 13:09:44.319074 containerd[1572]: time="2025-05-12T13:09:44.319063416Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 12 13:09:44.319126 containerd[1572]: time="2025-05-12T13:09:44.319107969Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 12 13:09:44.319303 containerd[1572]: time="2025-05-12T13:09:44.319288868Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 12 13:09:44.319358 containerd[1572]: time="2025-05-12T13:09:44.319347308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 12 13:09:44.319405 containerd[1572]: time="2025-05-12T13:09:44.319394517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 12 13:09:44.319447 containerd[1572]: time="2025-05-12T13:09:44.319436876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 12 13:09:44.319501 containerd[1572]: time="2025-05-12T13:09:44.319489294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 12 13:09:44.319542 containerd[1572]: time="2025-05-12T13:09:44.319532175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 12 13:09:44.319593 containerd[1572]: time="2025-05-12T13:09:44.319582309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 12 13:09:44.319643 containerd[1572]: time="2025-05-12T13:09:44.319632794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 12 13:09:44.319693 containerd[1572]: time="2025-05-12T13:09:44.319682096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 12 13:09:44.319736 containerd[1572]: time="2025-05-12T13:09:44.319725868Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 12 13:09:44.319777 containerd[1572]: time="2025-05-12T13:09:44.319766895Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 12 13:09:44.319879 containerd[1572]: time="2025-05-12T13:09:44.319866833Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 12 13:09:44.319928 containerd[1572]: time="2025-05-12T13:09:44.319917347Z" level=info msg="Start snapshots syncer"
May 12 13:09:44.319990 containerd[1572]: time="2025-05-12T13:09:44.319979324Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 12 13:09:44.320266 containerd[1572]: time="2025-05-12T13:09:44.320224704Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 12 13:09:44.320397 containerd[1572]: time="2025-05-12T13:09:44.320384774Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 12 13:09:44.321100 containerd[1572]: time="2025-05-12T13:09:44.321084416Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 12 13:09:44.321262 containerd[1572]: time="2025-05-12T13:09:44.321226874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 12 13:09:44.321337 containerd[1572]: time="2025-05-12T13:09:44.321322924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 12 13:09:44.321388 containerd[1572]: time="2025-05-12T13:09:44.321376625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 12 13:09:44.321458 containerd[1572]: time="2025-05-12T13:09:44.321445053Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 12 13:09:44.321512 containerd[1572]: time="2025-05-12T13:09:44.321500166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 12 13:09:44.321562 containerd[1572]: time="2025-05-12T13:09:44.321550691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 12 13:09:44.321611 containerd[1572]: time="2025-05-12T13:09:44.321599793Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 12 13:09:44.321672 containerd[1572]: time="2025-05-12T13:09:44.321660267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 12 13:09:44.321723 containerd[1572]: time="2025-05-12T13:09:44.321711503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 12 13:09:44.321782 containerd[1572]: time="2025-05-12T13:09:44.321769902Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 12 13:09:44.321873 containerd[1572]: time="2025-05-12T13:09:44.321859961Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 12 13:09:44.321935 containerd[1572]: time="2025-05-12T13:09:44.321919252Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 12 13:09:44.321984 containerd[1572]: time="2025-05-12T13:09:44.321971200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 12 13:09:44.322034 containerd[1572]: time="2025-05-12T13:09:44.322020612Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 12 13:09:44.322079 containerd[1572]: time="2025-05-12T13:09:44.322067931Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 12 13:09:44.322151 containerd[1572]: time="2025-05-12T13:09:44.322137081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 12 13:09:44.322204 containerd[1572]: time="2025-05-12T13:09:44.322192214Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 12 13:09:44.322279 containerd[1572]: time="2025-05-12T13:09:44.322266955Z" level=info msg="runtime interface created"
May 12 13:09:44.322324 containerd[1572]: time="2025-05-12T13:09:44.322313622Z" level=info msg="created NRI interface"
May 12 13:09:44.322379 containerd[1572]: time="2025-05-12T13:09:44.322366682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 12 13:09:44.322429 containerd[1572]: time="2025-05-12T13:09:44.322418338Z" level=info msg="Connect containerd service"
May 12 13:09:44.322494 containerd[1572]: time="2025-05-12T13:09:44.322482789Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 12 13:09:44.323386
containerd[1572]: time="2025-05-12T13:09:44.323363932Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 13:09:44.333420 systemd-logind[1558]: New seat seat0. May 12 13:09:44.353063 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 12 13:09:44.361150 systemd[1]: Started systemd-logind.service - User Login Management. May 12 13:09:44.362986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 13:09:44.397596 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 12 13:09:44.401046 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 12 13:09:44.427288 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 12 13:09:44.430932 systemd[1]: Starting issuegen.service - Generate /run/issue... May 12 13:09:44.438904 containerd[1572]: time="2025-05-12T13:09:44.438870610Z" level=info msg="Start subscribing containerd event" May 12 13:09:44.439104 containerd[1572]: time="2025-05-12T13:09:44.439078159Z" level=info msg="Start recovering state" May 12 13:09:44.439241 containerd[1572]: time="2025-05-12T13:09:44.439226617Z" level=info msg="Start event monitor" May 12 13:09:44.439329 containerd[1572]: time="2025-05-12T13:09:44.439296909Z" level=info msg="Start cni network conf syncer for default" May 12 13:09:44.439329 containerd[1572]: time="2025-05-12T13:09:44.439311937Z" level=info msg="Start streaming server" May 12 13:09:44.439329 containerd[1572]: time="2025-05-12T13:09:44.439321295Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 12 13:09:44.439329 containerd[1572]: time="2025-05-12T13:09:44.439328248Z" level=info msg="runtime interface starting up..." 
May 12 13:09:44.439329 containerd[1572]: time="2025-05-12T13:09:44.439333287Z" level=info msg="starting plugins..." May 12 13:09:44.439475 containerd[1572]: time="2025-05-12T13:09:44.439347755Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 12 13:09:44.439475 containerd[1572]: time="2025-05-12T13:09:44.439044676Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 12 13:09:44.439519 containerd[1572]: time="2025-05-12T13:09:44.439490412Z" level=info msg=serving... address=/run/containerd/containerd.sock May 12 13:09:44.439542 containerd[1572]: time="2025-05-12T13:09:44.439534705Z" level=info msg="containerd successfully booted in 0.144872s" May 12 13:09:44.440341 systemd[1]: Started containerd.service - containerd container runtime. May 12 13:09:44.454709 systemd[1]: issuegen.service: Deactivated successfully. May 12 13:09:44.455143 systemd[1]: Finished issuegen.service - Generate /run/issue. May 12 13:09:44.458161 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 12 13:09:44.481639 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 12 13:09:44.485295 systemd[1]: Started getty@tty1.service - Getty on tty1. May 12 13:09:44.488193 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 12 13:09:44.489522 systemd[1]: Reached target getty.target - Login Prompts. May 12 13:09:44.582819 tar[1565]: linux-amd64/LICENSE May 12 13:09:44.582920 tar[1565]: linux-amd64/README.md May 12 13:09:44.605285 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 12 13:09:45.222513 systemd-networkd[1501]: eth0: Gained IPv6LL May 12 13:09:45.226132 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 12 13:09:45.227973 systemd[1]: Reached target network-online.target - Network is Online. May 12 13:09:45.230560 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
May 12 13:09:45.233099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:09:45.252323 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 12 13:09:45.271105 systemd[1]: coreos-metadata.service: Deactivated successfully. May 12 13:09:45.271457 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 12 13:09:45.273186 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 12 13:09:45.277804 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 12 13:09:45.917157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:09:45.919125 systemd[1]: Reached target multi-user.target - Multi-User System. May 12 13:09:45.920893 systemd[1]: Startup finished in 2.803s (kernel) + 5.826s (initrd) + 4.927s (userspace) = 13.556s. May 12 13:09:45.943379 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 13:09:46.395710 kubelet[1688]: E0512 13:09:46.395638 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 13:09:46.400370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 13:09:46.400570 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 13:09:46.400950 systemd[1]: kubelet.service: Consumed 958ms CPU time, 242.6M memory peak. May 12 13:09:48.245310 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 12 13:09:48.246510 systemd[1]: Started sshd@0-10.0.0.126:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). 
May 12 13:09:48.322829 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:09:48.324689 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:09:48.330920 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 12 13:09:48.331995 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 12 13:09:48.338763 systemd-logind[1558]: New session 1 of user core. May 12 13:09:48.352922 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 12 13:09:48.355839 systemd[1]: Starting user@500.service - User Manager for UID 500... May 12 13:09:48.372609 (systemd)[1706]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 12 13:09:48.375024 systemd-logind[1558]: New session c1 of user core. May 12 13:09:48.524990 systemd[1706]: Queued start job for default target default.target. May 12 13:09:48.548438 systemd[1706]: Created slice app.slice - User Application Slice. May 12 13:09:48.548462 systemd[1706]: Reached target paths.target - Paths. May 12 13:09:48.548499 systemd[1706]: Reached target timers.target - Timers. May 12 13:09:48.549920 systemd[1706]: Starting dbus.socket - D-Bus User Message Bus Socket... May 12 13:09:48.559657 systemd[1706]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 12 13:09:48.559711 systemd[1706]: Reached target sockets.target - Sockets. May 12 13:09:48.559745 systemd[1706]: Reached target basic.target - Basic System. May 12 13:09:48.559787 systemd[1706]: Reached target default.target - Main User Target. May 12 13:09:48.559816 systemd[1706]: Startup finished in 178ms. May 12 13:09:48.560210 systemd[1]: Started user@500.service - User Manager for UID 500. May 12 13:09:48.561675 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 12 13:09:48.631758 systemd[1]: Started sshd@1-10.0.0.126:22-10.0.0.1:38462.service - OpenSSH per-connection server daemon (10.0.0.1:38462). May 12 13:09:48.685617 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 38462 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:09:48.686933 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:09:48.690765 systemd-logind[1558]: New session 2 of user core. May 12 13:09:48.709349 systemd[1]: Started session-2.scope - Session 2 of User core. May 12 13:09:48.761625 sshd[1719]: Connection closed by 10.0.0.1 port 38462 May 12 13:09:48.761937 sshd-session[1717]: pam_unix(sshd:session): session closed for user core May 12 13:09:48.773737 systemd[1]: sshd@1-10.0.0.126:22-10.0.0.1:38462.service: Deactivated successfully. May 12 13:09:48.775479 systemd[1]: session-2.scope: Deactivated successfully. May 12 13:09:48.776253 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. May 12 13:09:48.778796 systemd[1]: Started sshd@2-10.0.0.126:22-10.0.0.1:38464.service - OpenSSH per-connection server daemon (10.0.0.1:38464). May 12 13:09:48.779352 systemd-logind[1558]: Removed session 2. May 12 13:09:48.839498 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 38464 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:09:48.840801 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:09:48.844865 systemd-logind[1558]: New session 3 of user core. May 12 13:09:48.854361 systemd[1]: Started session-3.scope - Session 3 of User core. May 12 13:09:48.902587 sshd[1728]: Connection closed by 10.0.0.1 port 38464 May 12 13:09:48.902888 sshd-session[1725]: pam_unix(sshd:session): session closed for user core May 12 13:09:48.918496 systemd[1]: sshd@2-10.0.0.126:22-10.0.0.1:38464.service: Deactivated successfully. 
May 12 13:09:48.920268 systemd[1]: session-3.scope: Deactivated successfully. May 12 13:09:48.921038 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. May 12 13:09:48.923501 systemd[1]: Started sshd@3-10.0.0.126:22-10.0.0.1:38478.service - OpenSSH per-connection server daemon (10.0.0.1:38478). May 12 13:09:48.924075 systemd-logind[1558]: Removed session 3. May 12 13:09:48.988714 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 38478 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:09:48.990229 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:09:48.994033 systemd-logind[1558]: New session 4 of user core. May 12 13:09:49.006359 systemd[1]: Started session-4.scope - Session 4 of User core. May 12 13:09:49.058560 sshd[1736]: Connection closed by 10.0.0.1 port 38478 May 12 13:09:49.059683 sshd-session[1734]: pam_unix(sshd:session): session closed for user core May 12 13:09:49.067384 systemd[1]: sshd@3-10.0.0.126:22-10.0.0.1:38478.service: Deactivated successfully. May 12 13:09:49.068919 systemd[1]: session-4.scope: Deactivated successfully. May 12 13:09:49.069730 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. May 12 13:09:49.072157 systemd[1]: Started sshd@4-10.0.0.126:22-10.0.0.1:38486.service - OpenSSH per-connection server daemon (10.0.0.1:38486). May 12 13:09:49.072817 systemd-logind[1558]: Removed session 4. May 12 13:09:49.136960 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 38486 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:09:49.138289 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:09:49.142136 systemd-logind[1558]: New session 5 of user core. May 12 13:09:49.152354 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 12 13:09:49.207374 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 12 13:09:49.207671 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:09:49.222893 sudo[1745]: pam_unix(sudo:session): session closed for user root May 12 13:09:49.224257 sshd[1744]: Connection closed by 10.0.0.1 port 38486 May 12 13:09:49.224592 sshd-session[1742]: pam_unix(sshd:session): session closed for user core May 12 13:09:49.236471 systemd[1]: sshd@4-10.0.0.126:22-10.0.0.1:38486.service: Deactivated successfully. May 12 13:09:49.237997 systemd[1]: session-5.scope: Deactivated successfully. May 12 13:09:49.238740 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. May 12 13:09:49.242004 systemd[1]: Started sshd@5-10.0.0.126:22-10.0.0.1:38502.service - OpenSSH per-connection server daemon (10.0.0.1:38502). May 12 13:09:49.242566 systemd-logind[1558]: Removed session 5. May 12 13:09:49.294521 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 38502 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:09:49.295793 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:09:49.299851 systemd-logind[1558]: New session 6 of user core. May 12 13:09:49.309360 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 12 13:09:49.361701 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 12 13:09:49.361998 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:09:49.519881 sudo[1755]: pam_unix(sudo:session): session closed for user root May 12 13:09:49.526053 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 12 13:09:49.526369 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:09:49.537745 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 13:09:49.588590 augenrules[1777]: No rules May 12 13:09:49.590178 systemd[1]: audit-rules.service: Deactivated successfully. May 12 13:09:49.590460 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 13:09:49.591498 sudo[1754]: pam_unix(sudo:session): session closed for user root May 12 13:09:49.592884 sshd[1753]: Connection closed by 10.0.0.1 port 38502 May 12 13:09:49.593161 sshd-session[1751]: pam_unix(sshd:session): session closed for user core May 12 13:09:49.600934 systemd[1]: sshd@5-10.0.0.126:22-10.0.0.1:38502.service: Deactivated successfully. May 12 13:09:49.602932 systemd[1]: session-6.scope: Deactivated successfully. May 12 13:09:49.603686 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. May 12 13:09:49.606487 systemd[1]: Started sshd@6-10.0.0.126:22-10.0.0.1:38514.service - OpenSSH per-connection server daemon (10.0.0.1:38514). May 12 13:09:49.607033 systemd-logind[1558]: Removed session 6. May 12 13:09:49.660917 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 38514 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:09:49.662224 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:09:49.666435 systemd-logind[1558]: New session 7 of user core. 
May 12 13:09:49.676358 systemd[1]: Started session-7.scope - Session 7 of User core. May 12 13:09:49.727305 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 12 13:09:49.727604 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:09:50.017643 systemd[1]: Starting docker.service - Docker Application Container Engine... May 12 13:09:50.031681 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 12 13:09:50.288710 dockerd[1809]: time="2025-05-12T13:09:50.288585352Z" level=info msg="Starting up" May 12 13:09:50.289666 dockerd[1809]: time="2025-05-12T13:09:50.289631775Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 12 13:09:51.388998 dockerd[1809]: time="2025-05-12T13:09:51.388948609Z" level=info msg="Loading containers: start." May 12 13:09:51.455279 kernel: Initializing XFRM netlink socket May 12 13:09:51.836301 systemd-networkd[1501]: docker0: Link UP May 12 13:09:51.841828 dockerd[1809]: time="2025-05-12T13:09:51.841790882Z" level=info msg="Loading containers: done." May 12 13:09:51.855772 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4120472635-merged.mount: Deactivated successfully. 
May 12 13:09:51.857565 dockerd[1809]: time="2025-05-12T13:09:51.857532419Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 12 13:09:51.857619 dockerd[1809]: time="2025-05-12T13:09:51.857604123Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 12 13:09:51.857722 dockerd[1809]: time="2025-05-12T13:09:51.857703620Z" level=info msg="Initializing buildkit" May 12 13:09:51.887093 dockerd[1809]: time="2025-05-12T13:09:51.887058421Z" level=info msg="Completed buildkit initialization" May 12 13:09:51.890814 dockerd[1809]: time="2025-05-12T13:09:51.890788989Z" level=info msg="Daemon has completed initialization" May 12 13:09:51.890902 dockerd[1809]: time="2025-05-12T13:09:51.890851095Z" level=info msg="API listen on /run/docker.sock" May 12 13:09:51.890995 systemd[1]: Started docker.service - Docker Application Container Engine. May 12 13:09:52.676264 containerd[1572]: time="2025-05-12T13:09:52.676202870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 12 13:09:53.286036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762229725.mount: Deactivated successfully. 
May 12 13:09:54.847455 containerd[1572]: time="2025-05-12T13:09:54.847396048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:54.850356 containerd[1572]: time="2025-05-12T13:09:54.850292181Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 12 13:09:54.853618 containerd[1572]: time="2025-05-12T13:09:54.853585628Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:54.860259 containerd[1572]: time="2025-05-12T13:09:54.860207880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:54.861096 containerd[1572]: time="2025-05-12T13:09:54.861039339Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.184793729s" May 12 13:09:54.861096 containerd[1572]: time="2025-05-12T13:09:54.861092459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 12 13:09:54.881274 containerd[1572]: time="2025-05-12T13:09:54.881216416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 12 13:09:56.630512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 12 13:09:56.632056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:09:56.812135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:09:56.834545 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 13:09:56.884050 kubelet[2101]: E0512 13:09:56.883902 2101 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 13:09:56.890583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 13:09:56.890819 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 13:09:56.891234 systemd[1]: kubelet.service: Consumed 206ms CPU time, 95.5M memory peak. 
May 12 13:09:58.326887 containerd[1572]: time="2025-05-12T13:09:58.326820003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:58.327556 containerd[1572]: time="2025-05-12T13:09:58.327490611Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 12 13:09:58.328835 containerd[1572]: time="2025-05-12T13:09:58.328779188Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:58.331196 containerd[1572]: time="2025-05-12T13:09:58.331150456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:58.332025 containerd[1572]: time="2025-05-12T13:09:58.331951258Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 3.450678186s" May 12 13:09:58.332025 containerd[1572]: time="2025-05-12T13:09:58.332020237Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 12 13:09:58.352594 containerd[1572]: time="2025-05-12T13:09:58.352552340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 12 13:09:59.340832 containerd[1572]: time="2025-05-12T13:09:59.340772348Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:59.341645 containerd[1572]: time="2025-05-12T13:09:59.341595081Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 12 13:09:59.342696 containerd[1572]: time="2025-05-12T13:09:59.342641714Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:59.345119 containerd[1572]: time="2025-05-12T13:09:59.345089255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:09:59.346204 containerd[1572]: time="2025-05-12T13:09:59.346064013Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 993.475907ms" May 12 13:09:59.346204 containerd[1572]: time="2025-05-12T13:09:59.346096274Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 12 13:09:59.365957 containerd[1572]: time="2025-05-12T13:09:59.365915980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 12 13:10:00.356632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029238120.mount: Deactivated successfully. 
May 12 13:10:00.939557 containerd[1572]: time="2025-05-12T13:10:00.939493351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:00.940296 containerd[1572]: time="2025-05-12T13:10:00.940238199Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 12 13:10:00.941496 containerd[1572]: time="2025-05-12T13:10:00.941460752Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:00.943678 containerd[1572]: time="2025-05-12T13:10:00.943627155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:00.944109 containerd[1572]: time="2025-05-12T13:10:00.944082329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.578132696s" May 12 13:10:00.944142 containerd[1572]: time="2025-05-12T13:10:00.944109891Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 12 13:10:00.962127 containerd[1572]: time="2025-05-12T13:10:00.962086630Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 12 13:10:01.480396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120573316.mount: Deactivated successfully. 
May 12 13:10:02.096887 containerd[1572]: time="2025-05-12T13:10:02.096822320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:02.097597 containerd[1572]: time="2025-05-12T13:10:02.097542330Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 12 13:10:02.098714 containerd[1572]: time="2025-05-12T13:10:02.098662922Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:02.100834 containerd[1572]: time="2025-05-12T13:10:02.100799279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:02.101964 containerd[1572]: time="2025-05-12T13:10:02.101920432Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.139795691s" May 12 13:10:02.101964 containerd[1572]: time="2025-05-12T13:10:02.101956570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 12 13:10:02.121353 containerd[1572]: time="2025-05-12T13:10:02.121323617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 12 13:10:02.595500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240426413.mount: Deactivated successfully. 
May 12 13:10:02.601955 containerd[1572]: time="2025-05-12T13:10:02.601900452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:02.602660 containerd[1572]: time="2025-05-12T13:10:02.602602228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 12 13:10:02.603750 containerd[1572]: time="2025-05-12T13:10:02.603707251Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:02.605647 containerd[1572]: time="2025-05-12T13:10:02.605607686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:02.606241 containerd[1572]: time="2025-05-12T13:10:02.606205086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 484.845782ms" May 12 13:10:02.606291 containerd[1572]: time="2025-05-12T13:10:02.606237227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 12 13:10:02.624286 containerd[1572]: time="2025-05-12T13:10:02.623931566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 12 13:10:03.136538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491079932.mount: Deactivated successfully. 
May 12 13:10:05.155208 containerd[1572]: time="2025-05-12T13:10:05.155151812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:05.155947 containerd[1572]: time="2025-05-12T13:10:05.155912619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 12 13:10:05.157202 containerd[1572]: time="2025-05-12T13:10:05.157163315Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:05.159454 containerd[1572]: time="2025-05-12T13:10:05.159399920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:05.160340 containerd[1572]: time="2025-05-12T13:10:05.160310157Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.535885797s" May 12 13:10:05.160376 containerd[1572]: time="2025-05-12T13:10:05.160347207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 12 13:10:07.130512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 12 13:10:07.132009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:10:07.301895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
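An aside for anyone auditing these pulls: containerd logs both the image size in bytes and the wall-clock pull duration, so effective throughput can be recovered directly from the "Pulled image … size X in Y" entries above. A minimal sketch — the byte counts and durations are copied verbatim from this log; the helper name and MiB/s framing are ours:

```python
# Effective pull throughput from containerd's "Pulled image ... size X in Y" log lines.
# Sizes and durations below are taken verbatim from the entries above.
pulls = {
    "kube-proxy:v1.30.12": (29_184_836, 1.578132696),
    "coredns:v1.11.1":     (18_182_961, 1.139795691),
    "etcd:3.5.12-0":       (57_236_178, 2.535885797),
}

def throughput_mib_s(size_bytes: int, seconds: float) -> float:
    """Bytes per second expressed in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

for image, (size, secs) in pulls.items():
    print(f"{image}: {throughput_mib_s(size, secs):.1f} MiB/s")
```

The three pulls land in the same rough band (roughly 15–22 MiB/s here), which is one quick way to tell registry bandwidth apart from a single slow layer.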
May 12 13:10:07.316530 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 13:10:07.355162 kubelet[2366]: E0512 13:10:07.355083 2366 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 13:10:07.359049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 13:10:07.359296 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 13:10:07.359669 systemd[1]: kubelet.service: Consumed 178ms CPU time, 96.6M memory peak. May 12 13:10:07.548113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:10:07.548338 systemd[1]: kubelet.service: Consumed 178ms CPU time, 96.6M memory peak. May 12 13:10:07.550555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:10:07.567263 systemd[1]: Reload requested from client PID 2380 ('systemctl') (unit session-7.scope)... May 12 13:10:07.567276 systemd[1]: Reloading... May 12 13:10:07.656298 zram_generator::config[2433]: No configuration found. May 12 13:10:08.082864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 13:10:08.195915 systemd[1]: Reloading finished in 628 ms. May 12 13:10:08.249908 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 12 13:10:08.250000 systemd[1]: kubelet.service: Failed with result 'signal'. May 12 13:10:08.250333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 12 13:10:08.250375 systemd[1]: kubelet.service: Consumed 124ms CPU time, 83.6M memory peak. May 12 13:10:08.251947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:10:08.395077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:10:08.399959 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 13:10:08.440270 kubelet[2472]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 13:10:08.440270 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 12 13:10:08.440270 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 12 13:10:08.440668 kubelet[2472]: I0512 13:10:08.440298 2472 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 13:10:08.720159 kubelet[2472]: I0512 13:10:08.720074 2472 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 12 13:10:08.720159 kubelet[2472]: I0512 13:10:08.720099 2472 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 13:10:08.720339 kubelet[2472]: I0512 13:10:08.720292 2472 server.go:927] "Client rotation is on, will bootstrap in background" May 12 13:10:08.734649 kubelet[2472]: I0512 13:10:08.734625 2472 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 13:10:08.735277 kubelet[2472]: E0512 13:10:08.735166 2472 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.746167 kubelet[2472]: I0512 13:10:08.746128 2472 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 13:10:08.747767 kubelet[2472]: I0512 13:10:08.747733 2472 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 13:10:08.747912 kubelet[2472]: I0512 13:10:08.747759 2472 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 12 13:10:08.748290 kubelet[2472]: I0512 13:10:08.748267 2472 topology_manager.go:138] "Creating topology manager with none policy" May 12 
13:10:08.748290 kubelet[2472]: I0512 13:10:08.748282 2472 container_manager_linux.go:301] "Creating device plugin manager" May 12 13:10:08.748422 kubelet[2472]: I0512 13:10:08.748404 2472 state_mem.go:36] "Initialized new in-memory state store" May 12 13:10:08.748989 kubelet[2472]: I0512 13:10:08.748970 2472 kubelet.go:400] "Attempting to sync node with API server" May 12 13:10:08.748989 kubelet[2472]: I0512 13:10:08.748985 2472 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 13:10:08.749031 kubelet[2472]: I0512 13:10:08.749003 2472 kubelet.go:312] "Adding apiserver pod source" May 12 13:10:08.749031 kubelet[2472]: I0512 13:10:08.749016 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 13:10:08.749592 kubelet[2472]: W0512 13:10:08.749546 2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.749620 kubelet[2472]: E0512 13:10:08.749598 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.750843 kubelet[2472]: W0512 13:10:08.750804 2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.750873 kubelet[2472]: E0512 13:10:08.750845 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: 
connection refused May 12 13:10:08.752933 kubelet[2472]: I0512 13:10:08.752913 2472 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 12 13:10:08.754059 kubelet[2472]: I0512 13:10:08.754035 2472 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 13:10:08.754101 kubelet[2472]: W0512 13:10:08.754080 2472 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 12 13:10:08.754715 kubelet[2472]: I0512 13:10:08.754699 2472 server.go:1264] "Started kubelet" May 12 13:10:08.754799 kubelet[2472]: I0512 13:10:08.754764 2472 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 13:10:08.754967 kubelet[2472]: I0512 13:10:08.754915 2472 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 13:10:08.755288 kubelet[2472]: I0512 13:10:08.755240 2472 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 13:10:08.755585 kubelet[2472]: I0512 13:10:08.755554 2472 server.go:455] "Adding debug handlers to kubelet server" May 12 13:10:08.756599 kubelet[2472]: I0512 13:10:08.756574 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 13:10:08.757839 kubelet[2472]: E0512 13:10:08.757721 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:08.757839 kubelet[2472]: I0512 13:10:08.757757 2472 volume_manager.go:291] "Starting Kubelet Volume Manager" May 12 13:10:08.757839 kubelet[2472]: I0512 13:10:08.757827 2472 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 13:10:08.757926 kubelet[2472]: I0512 13:10:08.757872 2472 reconciler.go:26] "Reconciler: start to sync state" May 12 13:10:08.758194 kubelet[2472]: W0512 13:10:08.758152 
2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.758269 kubelet[2472]: E0512 13:10:08.758201 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.759111 kubelet[2472]: E0512 13:10:08.758495 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="200ms" May 12 13:10:08.759111 kubelet[2472]: I0512 13:10:08.758861 2472 factory.go:221] Registration of the systemd container factory successfully May 12 13:10:08.759111 kubelet[2472]: I0512 13:10:08.758923 2472 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 13:10:08.759597 kubelet[2472]: E0512 13:10:08.759582 2472 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 13:10:08.759955 kubelet[2472]: I0512 13:10:08.759941 2472 factory.go:221] Registration of the containerd container factory successfully May 12 13:10:08.760373 kubelet[2472]: E0512 13:10:08.760268 2472 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ec9a351bf64b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 13:10:08.75468101 +0000 UTC m=+0.350629121,LastTimestamp:2025-05-12 13:10:08.75468101 +0000 UTC m=+0.350629121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 12 13:10:08.772334 kubelet[2472]: I0512 13:10:08.771898 2472 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 13:10:08.773259 kubelet[2472]: I0512 13:10:08.773220 2472 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 13:10:08.773304 kubelet[2472]: I0512 13:10:08.773276 2472 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 13:10:08.773328 kubelet[2472]: I0512 13:10:08.773294 2472 kubelet.go:2337] "Starting kubelet main sync loop" May 12 13:10:08.773478 kubelet[2472]: E0512 13:10:08.773456 2472 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 13:10:08.773902 kubelet[2472]: W0512 13:10:08.773854 2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.773943 kubelet[2472]: E0512 13:10:08.773910 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:08.774658 kubelet[2472]: I0512 13:10:08.774640 2472 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 13:10:08.774658 kubelet[2472]: I0512 13:10:08.774654 2472 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 13:10:08.774710 kubelet[2472]: I0512 13:10:08.774677 2472 state_mem.go:36] "Initialized new in-memory state store" May 12 13:10:08.859361 kubelet[2472]: I0512 13:10:08.859340 2472 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:10:08.859659 kubelet[2472]: E0512 13:10:08.859629 2472 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" May 12 13:10:08.873756 kubelet[2472]: E0512 13:10:08.873727 2472 kubelet.go:2361] "Skipping 
pod synchronization" err="container runtime status check may not have completed yet" May 12 13:10:08.959751 kubelet[2472]: E0512 13:10:08.959710 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="400ms" May 12 13:10:09.060877 kubelet[2472]: I0512 13:10:09.060814 2472 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:10:09.061070 kubelet[2472]: E0512 13:10:09.061040 2472 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" May 12 13:10:09.074169 kubelet[2472]: E0512 13:10:09.074145 2472 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 12 13:10:09.083133 kubelet[2472]: I0512 13:10:09.083113 2472 policy_none.go:49] "None policy: Start" May 12 13:10:09.083858 kubelet[2472]: I0512 13:10:09.083556 2472 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 13:10:09.083858 kubelet[2472]: I0512 13:10:09.083581 2472 state_mem.go:35] "Initializing new in-memory state store" May 12 13:10:09.091121 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 12 13:10:09.104259 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 12 13:10:09.107421 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
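Note how the "Failed to ensure lease exists, will retry" entries step through doubling intervals — 200ms, then 400ms above, and 800ms and 1.6s in the entries that follow — while the API server at 10.0.0.126:6443 refuses connections. A sketch of that doubling schedule as visible in this log; this is our own illustration, not the kubelet's actual implementation, and the cap value is an assumed parameter, not one taken from the source:

```python
from itertools import islice

def lease_retry_intervals(base_ms: int = 200, cap_ms: int = 7000):
    """Yield doubling retry intervals in milliseconds, capped at cap_ms.

    Mirrors the schedule seen in the log (200ms -> 400ms -> 800ms -> 1.6s -> ...).
    cap_ms is illustrative; the real kubelet also bounds interval growth.
    """
    interval = base_ms
    while True:
        yield min(interval, cap_ms)
        interval *= 2

print(list(islice(lease_retry_intervals(), 6)))  # [200, 400, 800, 1600, 3200, 6400]
```

Capped doubling like this keeps retry pressure on an unreachable control plane low without delaying recovery much once it comes up.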
May 12 13:10:09.127090 kubelet[2472]: I0512 13:10:09.127008 2472 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 13:10:09.127227 kubelet[2472]: I0512 13:10:09.127194 2472 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 13:10:09.127369 kubelet[2472]: I0512 13:10:09.127314 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 13:10:09.128385 kubelet[2472]: E0512 13:10:09.128360 2472 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 12 13:10:09.360588 kubelet[2472]: E0512 13:10:09.360463 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="800ms" May 12 13:10:09.462932 kubelet[2472]: I0512 13:10:09.462901 2472 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:10:09.463300 kubelet[2472]: E0512 13:10:09.463208 2472 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" May 12 13:10:09.474308 kubelet[2472]: I0512 13:10:09.474276 2472 topology_manager.go:215] "Topology Admit Handler" podUID="9341cbf18e12bba570c18a4f37068a36" podNamespace="kube-system" podName="kube-apiserver-localhost" May 12 13:10:09.474988 kubelet[2472]: I0512 13:10:09.474942 2472 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 12 13:10:09.475687 kubelet[2472]: I0512 13:10:09.475660 2472 topology_manager.go:215] "Topology Admit Handler" 
podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 12 13:10:09.481172 systemd[1]: Created slice kubepods-burstable-pod9341cbf18e12bba570c18a4f37068a36.slice - libcontainer container kubepods-burstable-pod9341cbf18e12bba570c18a4f37068a36.slice. May 12 13:10:09.503908 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 12 13:10:09.519752 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 12 13:10:09.563580 kubelet[2472]: I0512 13:10:09.563540 2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9341cbf18e12bba570c18a4f37068a36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9341cbf18e12bba570c18a4f37068a36\") " pod="kube-system/kube-apiserver-localhost" May 12 13:10:09.563580 kubelet[2472]: I0512 13:10:09.563577 2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:09.563674 kubelet[2472]: I0512 13:10:09.563601 2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:09.563674 kubelet[2472]: I0512 13:10:09.563622 
2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 12 13:10:09.563674 kubelet[2472]: I0512 13:10:09.563641 2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9341cbf18e12bba570c18a4f37068a36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341cbf18e12bba570c18a4f37068a36\") " pod="kube-system/kube-apiserver-localhost" May 12 13:10:09.563674 kubelet[2472]: I0512 13:10:09.563662 2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:09.563770 kubelet[2472]: I0512 13:10:09.563682 2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:09.563770 kubelet[2472]: I0512 13:10:09.563700 2472 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:09.563770 kubelet[2472]: I0512 13:10:09.563717 2472 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9341cbf18e12bba570c18a4f37068a36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341cbf18e12bba570c18a4f37068a36\") " pod="kube-system/kube-apiserver-localhost" May 12 13:10:09.803870 containerd[1572]: time="2025-05-12T13:10:09.803818151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9341cbf18e12bba570c18a4f37068a36,Namespace:kube-system,Attempt:0,}" May 12 13:10:09.818297 containerd[1572]: time="2025-05-12T13:10:09.818272333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 12 13:10:09.821723 containerd[1572]: time="2025-05-12T13:10:09.821679364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 12 13:10:09.905672 kubelet[2472]: W0512 13:10:09.905607 2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:09.905672 kubelet[2472]: E0512 13:10:09.905665 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:10.041628 kubelet[2472]: W0512 13:10:10.041578 2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:10.041628 kubelet[2472]: E0512 
13:10:10.041624 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:10.142460 kubelet[2472]: W0512 13:10:10.142409 2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:10.142460 kubelet[2472]: E0512 13:10:10.142461 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:10.161028 kubelet[2472]: E0512 13:10:10.160986 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="1.6s" May 12 13:10:10.264700 kubelet[2472]: I0512 13:10:10.264675 2472 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:10:10.264967 kubelet[2472]: E0512 13:10:10.264932 2472 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" May 12 13:10:10.319625 kubelet[2472]: W0512 13:10:10.319572 2472 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:10.319687 kubelet[2472]: 
E0512 13:10:10.319628 2472 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused May 12 13:10:10.427061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660367443.mount: Deactivated successfully. May 12 13:10:10.434791 containerd[1572]: time="2025-05-12T13:10:10.434761761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:10:10.437919 containerd[1572]: time="2025-05-12T13:10:10.437895228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 12 13:10:10.438851 containerd[1572]: time="2025-05-12T13:10:10.438797962Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:10:10.440527 containerd[1572]: time="2025-05-12T13:10:10.440493793Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:10:10.441348 containerd[1572]: time="2025-05-12T13:10:10.441310474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 12 13:10:10.442428 containerd[1572]: time="2025-05-12T13:10:10.442370984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:10:10.443228 containerd[1572]: time="2025-05-12T13:10:10.443202533Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 12 13:10:10.444158 containerd[1572]: time="2025-05-12T13:10:10.444135233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:10:10.444828 containerd[1572]: time="2025-05-12T13:10:10.444786124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 638.088111ms" May 12 13:10:10.448617 containerd[1572]: time="2025-05-12T13:10:10.448580992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 628.212537ms" May 12 13:10:10.448928 containerd[1572]: time="2025-05-12T13:10:10.448907364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 622.889132ms" May 12 13:10:10.471408 containerd[1572]: time="2025-05-12T13:10:10.471357945Z" level=info msg="connecting to shim 2cabf8b031dc49a7e8022abceda551bf0e218efea712ad72b57aee773a8ba8c1" address="unix:///run/containerd/s/1221ae1bf74f6e98cd2e59d538c6e7a2cc413a3b1a49d31010433293dc1546c4" namespace=k8s.io protocol=ttrpc version=3 May 12 13:10:10.489190 
containerd[1572]: time="2025-05-12T13:10:10.489098631Z" level=info msg="connecting to shim 91b701e53c16d3c7d9461c029d800e030aaafed4ad7d7ac4c0d99cb542c128c5" address="unix:///run/containerd/s/15154ef930d6b4704a59689de91b4b85a9793eb86d6654193ecb5b9175997b68" namespace=k8s.io protocol=ttrpc version=3 May 12 13:10:10.492308 containerd[1572]: time="2025-05-12T13:10:10.492195400Z" level=info msg="connecting to shim 39edccc27741eb297d64e8b0124643402904c4c76a9053b1a70306c1a6cb0bb9" address="unix:///run/containerd/s/a3718ed1beab6a7cd965a0c89439db23d8b04d44795954387e8b0da855cefcb5" namespace=k8s.io protocol=ttrpc version=3 May 12 13:10:10.497503 systemd[1]: Started cri-containerd-2cabf8b031dc49a7e8022abceda551bf0e218efea712ad72b57aee773a8ba8c1.scope - libcontainer container 2cabf8b031dc49a7e8022abceda551bf0e218efea712ad72b57aee773a8ba8c1. May 12 13:10:10.510376 systemd[1]: Started cri-containerd-91b701e53c16d3c7d9461c029d800e030aaafed4ad7d7ac4c0d99cb542c128c5.scope - libcontainer container 91b701e53c16d3c7d9461c029d800e030aaafed4ad7d7ac4c0d99cb542c128c5. May 12 13:10:10.515545 systemd[1]: Started cri-containerd-39edccc27741eb297d64e8b0124643402904c4c76a9053b1a70306c1a6cb0bb9.scope - libcontainer container 39edccc27741eb297d64e8b0124643402904c4c76a9053b1a70306c1a6cb0bb9. 
May 12 13:10:10.557954 containerd[1572]: time="2025-05-12T13:10:10.557842485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"91b701e53c16d3c7d9461c029d800e030aaafed4ad7d7ac4c0d99cb542c128c5\"" May 12 13:10:10.563293 containerd[1572]: time="2025-05-12T13:10:10.563269044Z" level=info msg="CreateContainer within sandbox \"91b701e53c16d3c7d9461c029d800e030aaafed4ad7d7ac4c0d99cb542c128c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 12 13:10:10.566994 containerd[1572]: time="2025-05-12T13:10:10.566958174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9341cbf18e12bba570c18a4f37068a36,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cabf8b031dc49a7e8022abceda551bf0e218efea712ad72b57aee773a8ba8c1\"" May 12 13:10:10.568347 containerd[1572]: time="2025-05-12T13:10:10.568308687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"39edccc27741eb297d64e8b0124643402904c4c76a9053b1a70306c1a6cb0bb9\"" May 12 13:10:10.569511 containerd[1572]: time="2025-05-12T13:10:10.569477449Z" level=info msg="CreateContainer within sandbox \"2cabf8b031dc49a7e8022abceda551bf0e218efea712ad72b57aee773a8ba8c1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 12 13:10:10.570770 containerd[1572]: time="2025-05-12T13:10:10.570747050Z" level=info msg="CreateContainer within sandbox \"39edccc27741eb297d64e8b0124643402904c4c76a9053b1a70306c1a6cb0bb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 12 13:10:10.583553 containerd[1572]: time="2025-05-12T13:10:10.583520009Z" level=info msg="Container 26e3150c3dc005fd82a81b5d1075a2399dd9de8481c954aa6a28afdb469b5d2c: CDI devices from CRI Config.CDIDevices: []" May 12 
13:10:10.586102 containerd[1572]: time="2025-05-12T13:10:10.586069060Z" level=info msg="Container 37e3b54a7534eb89a8d1c3eaad328988dc4cbc59d5d9444288a6b8e970814daf: CDI devices from CRI Config.CDIDevices: []" May 12 13:10:10.593016 containerd[1572]: time="2025-05-12T13:10:10.592977168Z" level=info msg="CreateContainer within sandbox \"2cabf8b031dc49a7e8022abceda551bf0e218efea712ad72b57aee773a8ba8c1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26e3150c3dc005fd82a81b5d1075a2399dd9de8481c954aa6a28afdb469b5d2c\"" May 12 13:10:10.593487 containerd[1572]: time="2025-05-12T13:10:10.593465404Z" level=info msg="StartContainer for \"26e3150c3dc005fd82a81b5d1075a2399dd9de8481c954aa6a28afdb469b5d2c\"" May 12 13:10:10.594440 containerd[1572]: time="2025-05-12T13:10:10.594418932Z" level=info msg="connecting to shim 26e3150c3dc005fd82a81b5d1075a2399dd9de8481c954aa6a28afdb469b5d2c" address="unix:///run/containerd/s/1221ae1bf74f6e98cd2e59d538c6e7a2cc413a3b1a49d31010433293dc1546c4" protocol=ttrpc version=3 May 12 13:10:10.596559 containerd[1572]: time="2025-05-12T13:10:10.596269473Z" level=info msg="Container e4d6e5a561985be77237f0e81bdd9736e56d13450eced1b8b5bb8bcae9d4fbc3: CDI devices from CRI Config.CDIDevices: []" May 12 13:10:10.599764 containerd[1572]: time="2025-05-12T13:10:10.599742548Z" level=info msg="CreateContainer within sandbox \"91b701e53c16d3c7d9461c029d800e030aaafed4ad7d7ac4c0d99cb542c128c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"37e3b54a7534eb89a8d1c3eaad328988dc4cbc59d5d9444288a6b8e970814daf\"" May 12 13:10:10.600149 containerd[1572]: time="2025-05-12T13:10:10.600130926Z" level=info msg="StartContainer for \"37e3b54a7534eb89a8d1c3eaad328988dc4cbc59d5d9444288a6b8e970814daf\"" May 12 13:10:10.601023 containerd[1572]: time="2025-05-12T13:10:10.601003323Z" level=info msg="connecting to shim 37e3b54a7534eb89a8d1c3eaad328988dc4cbc59d5d9444288a6b8e970814daf" 
address="unix:///run/containerd/s/15154ef930d6b4704a59689de91b4b85a9793eb86d6654193ecb5b9175997b68" protocol=ttrpc version=3 May 12 13:10:10.604488 containerd[1572]: time="2025-05-12T13:10:10.604458914Z" level=info msg="CreateContainer within sandbox \"39edccc27741eb297d64e8b0124643402904c4c76a9053b1a70306c1a6cb0bb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4d6e5a561985be77237f0e81bdd9736e56d13450eced1b8b5bb8bcae9d4fbc3\"" May 12 13:10:10.605661 containerd[1572]: time="2025-05-12T13:10:10.605501820Z" level=info msg="StartContainer for \"e4d6e5a561985be77237f0e81bdd9736e56d13450eced1b8b5bb8bcae9d4fbc3\"" May 12 13:10:10.606647 containerd[1572]: time="2025-05-12T13:10:10.606618916Z" level=info msg="connecting to shim e4d6e5a561985be77237f0e81bdd9736e56d13450eced1b8b5bb8bcae9d4fbc3" address="unix:///run/containerd/s/a3718ed1beab6a7cd965a0c89439db23d8b04d44795954387e8b0da855cefcb5" protocol=ttrpc version=3 May 12 13:10:10.615486 systemd[1]: Started cri-containerd-26e3150c3dc005fd82a81b5d1075a2399dd9de8481c954aa6a28afdb469b5d2c.scope - libcontainer container 26e3150c3dc005fd82a81b5d1075a2399dd9de8481c954aa6a28afdb469b5d2c. May 12 13:10:10.623356 systemd[1]: Started cri-containerd-37e3b54a7534eb89a8d1c3eaad328988dc4cbc59d5d9444288a6b8e970814daf.scope - libcontainer container 37e3b54a7534eb89a8d1c3eaad328988dc4cbc59d5d9444288a6b8e970814daf. May 12 13:10:10.626756 systemd[1]: Started cri-containerd-e4d6e5a561985be77237f0e81bdd9736e56d13450eced1b8b5bb8bcae9d4fbc3.scope - libcontainer container e4d6e5a561985be77237f0e81bdd9736e56d13450eced1b8b5bb8bcae9d4fbc3. 
May 12 13:10:10.672225 containerd[1572]: time="2025-05-12T13:10:10.672185871Z" level=info msg="StartContainer for \"26e3150c3dc005fd82a81b5d1075a2399dd9de8481c954aa6a28afdb469b5d2c\" returns successfully" May 12 13:10:10.682352 containerd[1572]: time="2025-05-12T13:10:10.681604798Z" level=info msg="StartContainer for \"e4d6e5a561985be77237f0e81bdd9736e56d13450eced1b8b5bb8bcae9d4fbc3\" returns successfully" May 12 13:10:10.689726 containerd[1572]: time="2025-05-12T13:10:10.689690544Z" level=info msg="StartContainer for \"37e3b54a7534eb89a8d1c3eaad328988dc4cbc59d5d9444288a6b8e970814daf\" returns successfully" May 12 13:10:11.764479 kubelet[2472]: E0512 13:10:11.764406 2472 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 12 13:10:11.867022 kubelet[2472]: I0512 13:10:11.866967 2472 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:10:11.874695 kubelet[2472]: I0512 13:10:11.874665 2472 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 13:10:11.880933 kubelet[2472]: E0512 13:10:11.880905 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:11.981777 kubelet[2472]: E0512 13:10:11.981699 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.082715 kubelet[2472]: E0512 13:10:12.082589 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.183464 kubelet[2472]: E0512 13:10:12.183344 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.284162 kubelet[2472]: E0512 13:10:12.284099 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.384810 kubelet[2472]: 
E0512 13:10:12.384760 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.485205 kubelet[2472]: E0512 13:10:12.485162 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.585770 kubelet[2472]: E0512 13:10:12.585724 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.685955 kubelet[2472]: E0512 13:10:12.685847 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.786220 kubelet[2472]: E0512 13:10:12.786182 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.887289 kubelet[2472]: E0512 13:10:12.887233 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:12.987958 kubelet[2472]: E0512 13:10:12.987848 2472 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:10:13.752006 kubelet[2472]: I0512 13:10:13.751947 2472 apiserver.go:52] "Watching apiserver" May 12 13:10:13.758771 kubelet[2472]: I0512 13:10:13.758728 2472 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 13:10:13.831526 systemd[1]: Reload requested from client PID 2754 ('systemctl') (unit session-7.scope)... May 12 13:10:13.831543 systemd[1]: Reloading... May 12 13:10:13.914286 zram_generator::config[2797]: No configuration found. May 12 13:10:14.005736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 13:10:14.131618 systemd[1]: Reloading finished in 299 ms. 
May 12 13:10:14.159151 kubelet[2472]: I0512 13:10:14.159046 2472 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 13:10:14.159155 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:10:14.175761 systemd[1]: kubelet.service: Deactivated successfully. May 12 13:10:14.176048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:10:14.176103 systemd[1]: kubelet.service: Consumed 713ms CPU time, 114.4M memory peak. May 12 13:10:14.178065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:10:14.355180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:10:14.359734 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 13:10:14.402774 kubelet[2842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 13:10:14.402774 kubelet[2842]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 12 13:10:14.402774 kubelet[2842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 12 13:10:14.403365 kubelet[2842]: I0512 13:10:14.402827 2842 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 13:10:14.408939 kubelet[2842]: I0512 13:10:14.408899 2842 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 12 13:10:14.408939 kubelet[2842]: I0512 13:10:14.408927 2842 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 13:10:14.409132 kubelet[2842]: I0512 13:10:14.409112 2842 server.go:927] "Client rotation is on, will bootstrap in background" May 12 13:10:14.410440 kubelet[2842]: I0512 13:10:14.410413 2842 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 12 13:10:14.411555 kubelet[2842]: I0512 13:10:14.411525 2842 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 13:10:14.419431 kubelet[2842]: I0512 13:10:14.419408 2842 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 13:10:14.419680 kubelet[2842]: I0512 13:10:14.419621 2842 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 13:10:14.419807 kubelet[2842]: I0512 13:10:14.419661 2842 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 12 13:10:14.419884 kubelet[2842]: I0512 13:10:14.419814 2842 topology_manager.go:138] "Creating topology manager with none policy" May 12 
13:10:14.419884 kubelet[2842]: I0512 13:10:14.419824 2842 container_manager_linux.go:301] "Creating device plugin manager" May 12 13:10:14.419884 kubelet[2842]: I0512 13:10:14.419869 2842 state_mem.go:36] "Initialized new in-memory state store" May 12 13:10:14.419950 kubelet[2842]: I0512 13:10:14.419939 2842 kubelet.go:400] "Attempting to sync node with API server" May 12 13:10:14.419975 kubelet[2842]: I0512 13:10:14.419957 2842 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 13:10:14.419997 kubelet[2842]: I0512 13:10:14.419986 2842 kubelet.go:312] "Adding apiserver pod source" May 12 13:10:14.420020 kubelet[2842]: I0512 13:10:14.420003 2842 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 13:10:14.421132 kubelet[2842]: I0512 13:10:14.420560 2842 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 12 13:10:14.421132 kubelet[2842]: I0512 13:10:14.420780 2842 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 13:10:14.421229 kubelet[2842]: I0512 13:10:14.421213 2842 server.go:1264] "Started kubelet" May 12 13:10:14.423881 kubelet[2842]: I0512 13:10:14.423853 2842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 13:10:14.426570 kubelet[2842]: I0512 13:10:14.425604 2842 volume_manager.go:291] "Starting Kubelet Volume Manager" May 12 13:10:14.429802 kubelet[2842]: I0512 13:10:14.429328 2842 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 13:10:14.429802 kubelet[2842]: I0512 13:10:14.429491 2842 reconciler.go:26] "Reconciler: start to sync state" May 12 13:10:14.430754 kubelet[2842]: I0512 13:10:14.430708 2842 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 13:10:14.431758 kubelet[2842]: I0512 13:10:14.431735 2842 server.go:455] "Adding debug handlers to kubelet server" May 12 13:10:14.433265 
kubelet[2842]: I0512 13:10:14.432366 2842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 13:10:14.433265 kubelet[2842]: I0512 13:10:14.432612 2842 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 13:10:14.439397 kubelet[2842]: I0512 13:10:14.436753 2842 factory.go:221] Registration of the systemd container factory successfully May 12 13:10:14.439397 kubelet[2842]: I0512 13:10:14.436824 2842 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 13:10:14.440838 kubelet[2842]: I0512 13:10:14.440585 2842 factory.go:221] Registration of the containerd container factory successfully May 12 13:10:14.440838 kubelet[2842]: E0512 13:10:14.440674 2842 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 13:10:14.448202 kubelet[2842]: I0512 13:10:14.448091 2842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 13:10:14.449396 kubelet[2842]: I0512 13:10:14.449359 2842 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 13:10:14.449396 kubelet[2842]: I0512 13:10:14.449395 2842 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 13:10:14.449474 kubelet[2842]: I0512 13:10:14.449414 2842 kubelet.go:2337] "Starting kubelet main sync loop" May 12 13:10:14.449474 kubelet[2842]: E0512 13:10:14.449452 2842 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 13:10:14.473334 kubelet[2842]: I0512 13:10:14.473290 2842 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 13:10:14.473334 kubelet[2842]: I0512 13:10:14.473316 2842 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 13:10:14.473334 kubelet[2842]: I0512 13:10:14.473338 2842 state_mem.go:36] "Initialized new in-memory state store" May 12 13:10:14.473515 kubelet[2842]: I0512 13:10:14.473484 2842 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 12 13:10:14.473515 kubelet[2842]: I0512 13:10:14.473503 2842 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 12 13:10:14.473570 kubelet[2842]: I0512 13:10:14.473522 2842 policy_none.go:49] "None policy: Start" May 12 13:10:14.473988 kubelet[2842]: I0512 13:10:14.473961 2842 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 13:10:14.473988 kubelet[2842]: I0512 13:10:14.473987 2842 state_mem.go:35] "Initializing new in-memory state store" May 12 13:10:14.474143 kubelet[2842]: I0512 13:10:14.474137 2842 state_mem.go:75] "Updated machine memory state" May 12 13:10:14.480723 kubelet[2842]: I0512 13:10:14.480626 2842 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 13:10:14.480839 kubelet[2842]: I0512 13:10:14.480798 2842 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 13:10:14.481923 kubelet[2842]: I0512 13:10:14.480908 2842 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 13:10:14.527517 kubelet[2842]: I0512 13:10:14.527486 2842 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:10:14.536638 kubelet[2842]: I0512 13:10:14.536600 2842 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 12 13:10:14.536711 kubelet[2842]: I0512 13:10:14.536679 2842 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 13:10:14.549885 kubelet[2842]: I0512 13:10:14.549848 2842 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 12 13:10:14.549941 kubelet[2842]: I0512 13:10:14.549921 2842 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 12 13:10:14.549993 kubelet[2842]: I0512 13:10:14.549974 2842 topology_manager.go:215] "Topology Admit Handler" podUID="9341cbf18e12bba570c18a4f37068a36" podNamespace="kube-system" podName="kube-apiserver-localhost" May 12 13:10:14.630596 kubelet[2842]: I0512 13:10:14.630549 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9341cbf18e12bba570c18a4f37068a36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341cbf18e12bba570c18a4f37068a36\") " pod="kube-system/kube-apiserver-localhost" May 12 13:10:14.630596 kubelet[2842]: I0512 13:10:14.630582 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9341cbf18e12bba570c18a4f37068a36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341cbf18e12bba570c18a4f37068a36\") " pod="kube-system/kube-apiserver-localhost" May 12 13:10:14.630596 kubelet[2842]: I0512 13:10:14.630602 2842 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:14.630787 kubelet[2842]: I0512 13:10:14.630615 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:14.630787 kubelet[2842]: I0512 13:10:14.630631 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:14.630787 kubelet[2842]: I0512 13:10:14.630644 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:14.630787 kubelet[2842]: I0512 13:10:14.630658 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 12 13:10:14.630787 kubelet[2842]: I0512 13:10:14.630670 2842 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:10:14.630914 kubelet[2842]: I0512 13:10:14.630688 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9341cbf18e12bba570c18a4f37068a36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9341cbf18e12bba570c18a4f37068a36\") " pod="kube-system/kube-apiserver-localhost" May 12 13:10:15.420561 kubelet[2842]: I0512 13:10:15.420486 2842 apiserver.go:52] "Watching apiserver" May 12 13:10:15.430360 kubelet[2842]: I0512 13:10:15.430330 2842 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 13:10:15.471281 kubelet[2842]: E0512 13:10:15.470497 2842 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 12 13:10:15.540520 kubelet[2842]: I0512 13:10:15.540456 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.540421609 podStartE2EDuration="1.540421609s" podCreationTimestamp="2025-05-12 13:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:10:15.538164706 +0000 UTC m=+1.174512200" watchObservedRunningTime="2025-05-12 13:10:15.540421609 +0000 UTC m=+1.176769104" May 12 13:10:15.540801 kubelet[2842]: I0512 13:10:15.540580 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.54057628 podStartE2EDuration="1.54057628s" podCreationTimestamp="2025-05-12 13:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:10:15.523557808 +0000 UTC m=+1.159905302" watchObservedRunningTime="2025-05-12 13:10:15.54057628 +0000 UTC m=+1.176923764" May 12 13:10:15.551765 kubelet[2842]: I0512 13:10:15.551685 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.551668525 podStartE2EDuration="1.551668525s" podCreationTimestamp="2025-05-12 13:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:10:15.551221577 +0000 UTC m=+1.187569071" watchObservedRunningTime="2025-05-12 13:10:15.551668525 +0000 UTC m=+1.188016020" May 12 13:10:18.969869 sudo[1789]: pam_unix(sudo:session): session closed for user root May 12 13:10:18.971174 sshd[1788]: Connection closed by 10.0.0.1 port 38514 May 12 13:10:18.971600 sshd-session[1786]: pam_unix(sshd:session): session closed for user core May 12 13:10:18.976498 systemd[1]: sshd@6-10.0.0.126:22-10.0.0.1:38514.service: Deactivated successfully. May 12 13:10:18.978839 systemd[1]: session-7.scope: Deactivated successfully. May 12 13:10:18.979093 systemd[1]: session-7.scope: Consumed 4.312s CPU time, 236.9M memory peak. May 12 13:10:18.980478 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. May 12 13:10:18.981839 systemd-logind[1558]: Removed session 7. May 12 13:10:28.962312 kubelet[2842]: I0512 13:10:28.962264 2842 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 12 13:10:28.962814 containerd[1572]: time="2025-05-12T13:10:28.962651559Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 12 13:10:28.963066 kubelet[2842]: I0512 13:10:28.962825 2842 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 12 13:10:29.301896 kubelet[2842]: I0512 13:10:29.301764 2842 topology_manager.go:215] "Topology Admit Handler" podUID="bd8ba21e-6769-421f-9a2e-84a18df76762" podNamespace="kube-system" podName="kube-proxy-p5m5b"
May 12 13:10:29.309070 systemd[1]: Created slice kubepods-besteffort-podbd8ba21e_6769_421f_9a2e_84a18df76762.slice - libcontainer container kubepods-besteffort-podbd8ba21e_6769_421f_9a2e_84a18df76762.slice.
May 12 13:10:29.325315 kubelet[2842]: I0512 13:10:29.325263 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd8ba21e-6769-421f-9a2e-84a18df76762-xtables-lock\") pod \"kube-proxy-p5m5b\" (UID: \"bd8ba21e-6769-421f-9a2e-84a18df76762\") " pod="kube-system/kube-proxy-p5m5b"
May 12 13:10:29.325315 kubelet[2842]: I0512 13:10:29.325311 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4xc\" (UniqueName: \"kubernetes.io/projected/bd8ba21e-6769-421f-9a2e-84a18df76762-kube-api-access-tq4xc\") pod \"kube-proxy-p5m5b\" (UID: \"bd8ba21e-6769-421f-9a2e-84a18df76762\") " pod="kube-system/kube-proxy-p5m5b"
May 12 13:10:29.325471 kubelet[2842]: I0512 13:10:29.325332 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd8ba21e-6769-421f-9a2e-84a18df76762-kube-proxy\") pod \"kube-proxy-p5m5b\" (UID: \"bd8ba21e-6769-421f-9a2e-84a18df76762\") " pod="kube-system/kube-proxy-p5m5b"
May 12 13:10:29.325471 kubelet[2842]: I0512 13:10:29.325345 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd8ba21e-6769-421f-9a2e-84a18df76762-lib-modules\") pod \"kube-proxy-p5m5b\" (UID: \"bd8ba21e-6769-421f-9a2e-84a18df76762\") " pod="kube-system/kube-proxy-p5m5b"
May 12 13:10:29.374337 update_engine[1560]: I20250512 13:10:29.374280 1560 update_attempter.cc:509] Updating boot flags...
May 12 13:10:29.592215 kubelet[2842]: I0512 13:10:29.591932 2842 topology_manager.go:215] "Topology Admit Handler" podUID="f06de66e-6a27-4981-8454-ea8d791f2797" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-h4png"
May 12 13:10:29.613029 systemd[1]: Created slice kubepods-besteffort-podf06de66e_6a27_4981_8454_ea8d791f2797.slice - libcontainer container kubepods-besteffort-podf06de66e_6a27_4981_8454_ea8d791f2797.slice.
May 12 13:10:29.620868 containerd[1572]: time="2025-05-12T13:10:29.620825221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p5m5b,Uid:bd8ba21e-6769-421f-9a2e-84a18df76762,Namespace:kube-system,Attempt:0,}"
May 12 13:10:29.629079 kubelet[2842]: I0512 13:10:29.629050 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f06de66e-6a27-4981-8454-ea8d791f2797-var-lib-calico\") pod \"tigera-operator-797db67f8-h4png\" (UID: \"f06de66e-6a27-4981-8454-ea8d791f2797\") " pod="tigera-operator/tigera-operator-797db67f8-h4png"
May 12 13:10:29.629146 kubelet[2842]: I0512 13:10:29.629090 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg9q7\" (UniqueName: \"kubernetes.io/projected/f06de66e-6a27-4981-8454-ea8d791f2797-kube-api-access-gg9q7\") pod \"tigera-operator-797db67f8-h4png\" (UID: \"f06de66e-6a27-4981-8454-ea8d791f2797\") " pod="tigera-operator/tigera-operator-797db67f8-h4png"
May 12 13:10:29.656578 containerd[1572]: time="2025-05-12T13:10:29.656532995Z" level=info msg="connecting to shim 941b225ecc2bcce24091358303b39ba3f780927e6cb95e69f3871d6558b8f9f7" address="unix:///run/containerd/s/828d4178d390004fb36a85459279d6fb94842e3ed2cc91910b526d0c643202de" namespace=k8s.io protocol=ttrpc version=3
May 12 13:10:29.707384 systemd[1]: Started cri-containerd-941b225ecc2bcce24091358303b39ba3f780927e6cb95e69f3871d6558b8f9f7.scope - libcontainer container 941b225ecc2bcce24091358303b39ba3f780927e6cb95e69f3871d6558b8f9f7.
May 12 13:10:29.730218 containerd[1572]: time="2025-05-12T13:10:29.730183224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p5m5b,Uid:bd8ba21e-6769-421f-9a2e-84a18df76762,Namespace:kube-system,Attempt:0,} returns sandbox id \"941b225ecc2bcce24091358303b39ba3f780927e6cb95e69f3871d6558b8f9f7\""
May 12 13:10:29.732638 containerd[1572]: time="2025-05-12T13:10:29.732595072Z" level=info msg="CreateContainer within sandbox \"941b225ecc2bcce24091358303b39ba3f780927e6cb95e69f3871d6558b8f9f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 12 13:10:29.743008 containerd[1572]: time="2025-05-12T13:10:29.742970278Z" level=info msg="Container 1749fb0b1304a336ef552a11ebc4b9e6c7c36e35de78c5e39e2c768b7097c80a: CDI devices from CRI Config.CDIDevices: []"
May 12 13:10:29.751178 containerd[1572]: time="2025-05-12T13:10:29.751135437Z" level=info msg="CreateContainer within sandbox \"941b225ecc2bcce24091358303b39ba3f780927e6cb95e69f3871d6558b8f9f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1749fb0b1304a336ef552a11ebc4b9e6c7c36e35de78c5e39e2c768b7097c80a\""
May 12 13:10:29.751703 containerd[1572]: time="2025-05-12T13:10:29.751655384Z" level=info msg="StartContainer for \"1749fb0b1304a336ef552a11ebc4b9e6c7c36e35de78c5e39e2c768b7097c80a\""
May 12 13:10:29.752942 containerd[1572]: time="2025-05-12T13:10:29.752915817Z" level=info msg="connecting to shim 1749fb0b1304a336ef552a11ebc4b9e6c7c36e35de78c5e39e2c768b7097c80a" address="unix:///run/containerd/s/828d4178d390004fb36a85459279d6fb94842e3ed2cc91910b526d0c643202de" protocol=ttrpc version=3
May 12 13:10:29.777391 systemd[1]: Started cri-containerd-1749fb0b1304a336ef552a11ebc4b9e6c7c36e35de78c5e39e2c768b7097c80a.scope - libcontainer container 1749fb0b1304a336ef552a11ebc4b9e6c7c36e35de78c5e39e2c768b7097c80a.
May 12 13:10:29.822279 containerd[1572]: time="2025-05-12T13:10:29.821352780Z" level=info msg="StartContainer for \"1749fb0b1304a336ef552a11ebc4b9e6c7c36e35de78c5e39e2c768b7097c80a\" returns successfully"
May 12 13:10:29.916578 containerd[1572]: time="2025-05-12T13:10:29.916539622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-h4png,Uid:f06de66e-6a27-4981-8454-ea8d791f2797,Namespace:tigera-operator,Attempt:0,}"
May 12 13:10:29.935312 containerd[1572]: time="2025-05-12T13:10:29.935263505Z" level=info msg="connecting to shim 2886344a0514e59862fa13ce31cf7d3bc95edc00c2775f6ae7e7e610bf286cf6" address="unix:///run/containerd/s/3e3d02c9d3d7e40ba73fa3bb52941bfc60e2f33863127ae87bbe798274b50345" namespace=k8s.io protocol=ttrpc version=3
May 12 13:10:29.968384 systemd[1]: Started cri-containerd-2886344a0514e59862fa13ce31cf7d3bc95edc00c2775f6ae7e7e610bf286cf6.scope - libcontainer container 2886344a0514e59862fa13ce31cf7d3bc95edc00c2775f6ae7e7e610bf286cf6.
May 12 13:10:30.014264 containerd[1572]: time="2025-05-12T13:10:30.014204890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-h4png,Uid:f06de66e-6a27-4981-8454-ea8d791f2797,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2886344a0514e59862fa13ce31cf7d3bc95edc00c2775f6ae7e7e610bf286cf6\""
May 12 13:10:30.015997 containerd[1572]: time="2025-05-12T13:10:30.015970480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 12 13:10:30.456259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144771066.mount: Deactivated successfully.
May 12 13:10:31.467764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585176101.mount: Deactivated successfully.
May 12 13:10:31.753031 containerd[1572]: time="2025-05-12T13:10:31.752927197Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:10:31.754103 containerd[1572]: time="2025-05-12T13:10:31.754072988Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 12 13:10:31.755076 containerd[1572]: time="2025-05-12T13:10:31.755047927Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:10:31.757018 containerd[1572]: time="2025-05-12T13:10:31.756976342Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:10:31.757562 containerd[1572]: time="2025-05-12T13:10:31.757532265Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.741528212s"
May 12 13:10:31.757562 containerd[1572]: time="2025-05-12T13:10:31.757560109Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 12 13:10:31.759328 containerd[1572]: time="2025-05-12T13:10:31.759287062Z" level=info msg="CreateContainer within sandbox \"2886344a0514e59862fa13ce31cf7d3bc95edc00c2775f6ae7e7e610bf286cf6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 12 13:10:31.768393 containerd[1572]: time="2025-05-12T13:10:31.768350281Z" level=info msg="Container 32e7aff051c293738c9b91d9fe7707d247fed5222c35106d2e7a643469f57432: CDI devices from CRI Config.CDIDevices: []"
May 12 13:10:31.775994 containerd[1572]: time="2025-05-12T13:10:31.775954566Z" level=info msg="CreateContainer within sandbox \"2886344a0514e59862fa13ce31cf7d3bc95edc00c2775f6ae7e7e610bf286cf6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"32e7aff051c293738c9b91d9fe7707d247fed5222c35106d2e7a643469f57432\""
May 12 13:10:31.776374 containerd[1572]: time="2025-05-12T13:10:31.776326942Z" level=info msg="StartContainer for \"32e7aff051c293738c9b91d9fe7707d247fed5222c35106d2e7a643469f57432\""
May 12 13:10:31.777061 containerd[1572]: time="2025-05-12T13:10:31.777033912Z" level=info msg="connecting to shim 32e7aff051c293738c9b91d9fe7707d247fed5222c35106d2e7a643469f57432" address="unix:///run/containerd/s/3e3d02c9d3d7e40ba73fa3bb52941bfc60e2f33863127ae87bbe798274b50345" protocol=ttrpc version=3
May 12 13:10:31.804376 systemd[1]: Started cri-containerd-32e7aff051c293738c9b91d9fe7707d247fed5222c35106d2e7a643469f57432.scope - libcontainer container 32e7aff051c293738c9b91d9fe7707d247fed5222c35106d2e7a643469f57432.
May 12 13:10:31.834301 containerd[1572]: time="2025-05-12T13:10:31.834240771Z" level=info msg="StartContainer for \"32e7aff051c293738c9b91d9fe7707d247fed5222c35106d2e7a643469f57432\" returns successfully"
May 12 13:10:32.509270 kubelet[2842]: I0512 13:10:32.509201 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p5m5b" podStartSLOduration=3.509166358 podStartE2EDuration="3.509166358s" podCreationTimestamp="2025-05-12 13:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:10:30.504664791 +0000 UTC m=+16.141012285" watchObservedRunningTime="2025-05-12 13:10:32.509166358 +0000 UTC m=+18.145513852"
May 12 13:10:34.770523 kubelet[2842]: I0512 13:10:34.770029 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-h4png" podStartSLOduration=4.027210649 podStartE2EDuration="5.770006655s" podCreationTimestamp="2025-05-12 13:10:29 +0000 UTC" firstStartedPulling="2025-05-12 13:10:30.015353369 +0000 UTC m=+15.651700863" lastFinishedPulling="2025-05-12 13:10:31.758149375 +0000 UTC m=+17.394496869" observedRunningTime="2025-05-12 13:10:32.509439005 +0000 UTC m=+18.145786499" watchObservedRunningTime="2025-05-12 13:10:34.770006655 +0000 UTC m=+20.406354149"
May 12 13:10:34.771840 kubelet[2842]: I0512 13:10:34.771491 2842 topology_manager.go:215] "Topology Admit Handler" podUID="5607229d-0e38-4f9d-9cc2-81045daa8a64" podNamespace="calico-system" podName="calico-typha-5d59b46f79-k2f8k"
May 12 13:10:34.783380 systemd[1]: Created slice kubepods-besteffort-pod5607229d_0e38_4f9d_9cc2_81045daa8a64.slice - libcontainer container kubepods-besteffort-pod5607229d_0e38_4f9d_9cc2_81045daa8a64.slice.
May 12 13:10:34.820267 kubelet[2842]: I0512 13:10:34.820172 2842 topology_manager.go:215] "Topology Admit Handler" podUID="0e84c619-c67c-42d8-bae2-8a28ab38e7f0" podNamespace="calico-system" podName="calico-node-bv8dm"
May 12 13:10:34.829442 systemd[1]: Created slice kubepods-besteffort-pod0e84c619_c67c_42d8_bae2_8a28ab38e7f0.slice - libcontainer container kubepods-besteffort-pod0e84c619_c67c_42d8_bae2_8a28ab38e7f0.slice.
May 12 13:10:34.865506 kubelet[2842]: I0512 13:10:34.865458 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-tigera-ca-bundle\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865506 kubelet[2842]: I0512 13:10:34.865502 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-policysync\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865506 kubelet[2842]: I0512 13:10:34.865518 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-xtables-lock\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865686 kubelet[2842]: I0512 13:10:34.865532 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5607229d-0e38-4f9d-9cc2-81045daa8a64-typha-certs\") pod \"calico-typha-5d59b46f79-k2f8k\" (UID: \"5607229d-0e38-4f9d-9cc2-81045daa8a64\") " pod="calico-system/calico-typha-5d59b46f79-k2f8k"
May 12 13:10:34.865686 kubelet[2842]: I0512 13:10:34.865549 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5607229d-0e38-4f9d-9cc2-81045daa8a64-tigera-ca-bundle\") pod \"calico-typha-5d59b46f79-k2f8k\" (UID: \"5607229d-0e38-4f9d-9cc2-81045daa8a64\") " pod="calico-system/calico-typha-5d59b46f79-k2f8k"
May 12 13:10:34.865686 kubelet[2842]: I0512 13:10:34.865568 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-node-certs\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865686 kubelet[2842]: I0512 13:10:34.865601 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-var-run-calico\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865686 kubelet[2842]: I0512 13:10:34.865617 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-cni-bin-dir\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865844 kubelet[2842]: I0512 13:10:34.865633 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-flexvol-driver-host\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865844 kubelet[2842]: I0512 13:10:34.865648 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4fpt\" (UniqueName: \"kubernetes.io/projected/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-kube-api-access-t4fpt\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865844 kubelet[2842]: I0512 13:10:34.865664 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4ht8\" (UniqueName: \"kubernetes.io/projected/5607229d-0e38-4f9d-9cc2-81045daa8a64-kube-api-access-g4ht8\") pod \"calico-typha-5d59b46f79-k2f8k\" (UID: \"5607229d-0e38-4f9d-9cc2-81045daa8a64\") " pod="calico-system/calico-typha-5d59b46f79-k2f8k"
May 12 13:10:34.865844 kubelet[2842]: I0512 13:10:34.865678 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-var-lib-calico\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865844 kubelet[2842]: I0512 13:10:34.865698 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-cni-net-dir\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865946 kubelet[2842]: I0512 13:10:34.865712 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-cni-log-dir\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.865946 kubelet[2842]: I0512 13:10:34.865734 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e84c619-c67c-42d8-bae2-8a28ab38e7f0-lib-modules\") pod \"calico-node-bv8dm\" (UID: \"0e84c619-c67c-42d8-bae2-8a28ab38e7f0\") " pod="calico-system/calico-node-bv8dm"
May 12 13:10:34.932470 kubelet[2842]: I0512 13:10:34.931728 2842 topology_manager.go:215] "Topology Admit Handler" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" podNamespace="calico-system" podName="csi-node-driver-mgnmw"
May 12 13:10:34.932470 kubelet[2842]: E0512 13:10:34.931963 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95"
May 12 13:10:34.966494 kubelet[2842]: I0512 13:10:34.966449 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1c46340b-2b7a-4015-8ce6-2f3287662c95-varrun\") pod \"csi-node-driver-mgnmw\" (UID: \"1c46340b-2b7a-4015-8ce6-2f3287662c95\") " pod="calico-system/csi-node-driver-mgnmw"
May 12 13:10:34.966494 kubelet[2842]: I0512 13:10:34.966498 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c46340b-2b7a-4015-8ce6-2f3287662c95-kubelet-dir\") pod \"csi-node-driver-mgnmw\" (UID: \"1c46340b-2b7a-4015-8ce6-2f3287662c95\") " pod="calico-system/csi-node-driver-mgnmw"
May 12 13:10:34.966656 kubelet[2842]: I0512 13:10:34.966531 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h7t5\" (UniqueName: \"kubernetes.io/projected/1c46340b-2b7a-4015-8ce6-2f3287662c95-kube-api-access-2h7t5\") pod \"csi-node-driver-mgnmw\" (UID: \"1c46340b-2b7a-4015-8ce6-2f3287662c95\") " pod="calico-system/csi-node-driver-mgnmw"
May 12 13:10:34.967357 kubelet[2842]: I0512 13:10:34.967314 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1c46340b-2b7a-4015-8ce6-2f3287662c95-socket-dir\") pod \"csi-node-driver-mgnmw\" (UID: \"1c46340b-2b7a-4015-8ce6-2f3287662c95\") " pod="calico-system/csi-node-driver-mgnmw"
May 12 13:10:34.967423 kubelet[2842]: I0512 13:10:34.967370 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1c46340b-2b7a-4015-8ce6-2f3287662c95-registration-dir\") pod \"csi-node-driver-mgnmw\" (UID: \"1c46340b-2b7a-4015-8ce6-2f3287662c95\") " pod="calico-system/csi-node-driver-mgnmw"
May 12 13:10:34.972453 kubelet[2842]: E0512 13:10:34.972353 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:34.972453 kubelet[2842]: W0512 13:10:34.972380 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:34.972453 kubelet[2842]: E0512 13:10:34.972407 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:34.981882 kubelet[2842]: E0512 13:10:34.981704 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:34.981882 kubelet[2842]: W0512 13:10:34.981727 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:34.981882 kubelet[2842]: E0512 13:10:34.981749 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:34.984514 kubelet[2842]: E0512 13:10:34.984498 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:34.984621 kubelet[2842]: W0512 13:10:34.984608 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:34.984672 kubelet[2842]: E0512 13:10:34.984661 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:34.995027 kubelet[2842]: E0512 13:10:34.995007 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:34.995148 kubelet[2842]: W0512 13:10:34.995135 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:34.995202 kubelet[2842]: E0512 13:10:34.995190 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:34.995961 kubelet[2842]: E0512 13:10:34.995937 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:34.995961 kubelet[2842]: W0512 13:10:34.995951 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:34.995961 kubelet[2842]: E0512 13:10:34.995961 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.068645 kubelet[2842]: E0512 13:10:35.068543 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.068645 kubelet[2842]: W0512 13:10:35.068566 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.068645 kubelet[2842]: E0512 13:10:35.068617 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.069220 kubelet[2842]: E0512 13:10:35.068997 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.069220 kubelet[2842]: W0512 13:10:35.069023 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.069220 kubelet[2842]: E0512 13:10:35.069055 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.069412 kubelet[2842]: E0512 13:10:35.069389 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.069412 kubelet[2842]: W0512 13:10:35.069399 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.069412 kubelet[2842]: E0512 13:10:35.069414 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.069677 kubelet[2842]: E0512 13:10:35.069609 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.069677 kubelet[2842]: W0512 13:10:35.069617 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.069677 kubelet[2842]: E0512 13:10:35.069635 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.069872 kubelet[2842]: E0512 13:10:35.069858 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.069872 kubelet[2842]: W0512 13:10:35.069868 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.069922 kubelet[2842]: E0512 13:10:35.069881 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.070159 kubelet[2842]: E0512 13:10:35.070137 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.070159 kubelet[2842]: W0512 13:10:35.070154 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.070390 kubelet[2842]: E0512 13:10:35.070189 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.070390 kubelet[2842]: E0512 13:10:35.070350 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.070390 kubelet[2842]: W0512 13:10:35.070358 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.070497 kubelet[2842]: E0512 13:10:35.070409 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.070542 kubelet[2842]: E0512 13:10:35.070518 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.070542 kubelet[2842]: W0512 13:10:35.070531 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.070644 kubelet[2842]: E0512 13:10:35.070546 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.070735 kubelet[2842]: E0512 13:10:35.070719 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.070735 kubelet[2842]: W0512 13:10:35.070730 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.070856 kubelet[2842]: E0512 13:10:35.070842 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.070904 kubelet[2842]: E0512 13:10:35.070886 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.070904 kubelet[2842]: W0512 13:10:35.070892 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.070956 kubelet[2842]: E0512 13:10:35.070926 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.071043 kubelet[2842]: E0512 13:10:35.071028 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.071043 kubelet[2842]: W0512 13:10:35.071038 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.071128 kubelet[2842]: E0512 13:10:35.071100 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.071237 kubelet[2842]: E0512 13:10:35.071220 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.071237 kubelet[2842]: W0512 13:10:35.071231 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.071335 kubelet[2842]: E0512 13:10:35.071304 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.071545 kubelet[2842]: E0512 13:10:35.071530 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.071545 kubelet[2842]: W0512 13:10:35.071542 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.071592 kubelet[2842]: E0512 13:10:35.071555 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.071782 kubelet[2842]: E0512 13:10:35.071767 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.071782 kubelet[2842]: W0512 13:10:35.071778 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.071952 kubelet[2842]: E0512 13:10:35.071925 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.072002 kubelet[2842]: E0512 13:10:35.071986 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.072002 kubelet[2842]: W0512 13:10:35.071996 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.072122 kubelet[2842]: E0512 13:10:35.072028 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.072172 kubelet[2842]: E0512 13:10:35.072130 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.072172 kubelet[2842]: W0512 13:10:35.072137 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.072233 kubelet[2842]: E0512 13:10:35.072220 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.072675 kubelet[2842]: E0512 13:10:35.072431 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.072675 kubelet[2842]: W0512 13:10:35.072442 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.072675 kubelet[2842]: E0512 13:10:35.072474 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:35.072675 kubelet[2842]: E0512 13:10:35.072662 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:35.072675 kubelet[2842]: W0512 13:10:35.072669 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:35.072797 kubelet[2842]: E0512 13:10:35.072738 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 12 13:10:35.072884 kubelet[2842]: E0512 13:10:35.072864 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.072884 kubelet[2842]: W0512 13:10:35.072877 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.072957 kubelet[2842]: E0512 13:10:35.072898 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:35.073148 kubelet[2842]: E0512 13:10:35.073133 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.073148 kubelet[2842]: W0512 13:10:35.073145 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.073211 kubelet[2842]: E0512 13:10:35.073161 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:35.073411 kubelet[2842]: E0512 13:10:35.073394 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.073411 kubelet[2842]: W0512 13:10:35.073406 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.073460 kubelet[2842]: E0512 13:10:35.073421 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:35.073619 kubelet[2842]: E0512 13:10:35.073602 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.073619 kubelet[2842]: W0512 13:10:35.073616 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.073671 kubelet[2842]: E0512 13:10:35.073629 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:35.073805 kubelet[2842]: E0512 13:10:35.073791 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.073805 kubelet[2842]: W0512 13:10:35.073801 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.073862 kubelet[2842]: E0512 13:10:35.073809 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:35.073975 kubelet[2842]: E0512 13:10:35.073959 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.073975 kubelet[2842]: W0512 13:10:35.073969 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.074023 kubelet[2842]: E0512 13:10:35.073977 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:35.074175 kubelet[2842]: E0512 13:10:35.074159 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.074175 kubelet[2842]: W0512 13:10:35.074171 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.074231 kubelet[2842]: E0512 13:10:35.074180 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:35.081343 kubelet[2842]: E0512 13:10:35.081310 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:35.081343 kubelet[2842]: W0512 13:10:35.081333 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:35.081425 kubelet[2842]: E0512 13:10:35.081352 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:35.086869 containerd[1572]: time="2025-05-12T13:10:35.086834415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d59b46f79-k2f8k,Uid:5607229d-0e38-4f9d-9cc2-81045daa8a64,Namespace:calico-system,Attempt:0,}"
May 12 13:10:35.111757 containerd[1572]: time="2025-05-12T13:10:35.111716588Z" level=info msg="connecting to shim 91bb9a18b73244c0a491766ea9afac0ceb7d440bd631a3f03bbf1386058bea44" address="unix:///run/containerd/s/2c19528a8fda56874e41ac07512da24cd119b9e51ba4dcf40cdd732fafbaf951" namespace=k8s.io protocol=ttrpc version=3
May 12 13:10:35.132892 containerd[1572]: time="2025-05-12T13:10:35.132824124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bv8dm,Uid:0e84c619-c67c-42d8-bae2-8a28ab38e7f0,Namespace:calico-system,Attempt:0,}"
May 12 13:10:35.138473 systemd[1]: Started cri-containerd-91bb9a18b73244c0a491766ea9afac0ceb7d440bd631a3f03bbf1386058bea44.scope - libcontainer container 91bb9a18b73244c0a491766ea9afac0ceb7d440bd631a3f03bbf1386058bea44.
May 12 13:10:35.155579 containerd[1572]: time="2025-05-12T13:10:35.155520333Z" level=info msg="connecting to shim 49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1" address="unix:///run/containerd/s/f5b582b32d740913b4376f8caff8c565a0d3bb109c03e7ead292141753cc8427" namespace=k8s.io protocol=ttrpc version=3
May 12 13:10:35.180121 systemd[1]: Started cri-containerd-49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1.scope - libcontainer container 49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1.
May 12 13:10:35.188272 containerd[1572]: time="2025-05-12T13:10:35.187620335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d59b46f79-k2f8k,Uid:5607229d-0e38-4f9d-9cc2-81045daa8a64,Namespace:calico-system,Attempt:0,} returns sandbox id \"91bb9a18b73244c0a491766ea9afac0ceb7d440bd631a3f03bbf1386058bea44\""
May 12 13:10:35.189888 containerd[1572]: time="2025-05-12T13:10:35.189863276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 12 13:10:35.208949 containerd[1572]: time="2025-05-12T13:10:35.208916547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bv8dm,Uid:0e84c619-c67c-42d8-bae2-8a28ab38e7f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1\""
May 12 13:10:36.450418 kubelet[2842]: E0512 13:10:36.450357 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95"
May 12 13:10:38.409231 containerd[1572]: time="2025-05-12T13:10:38.409177132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:10:38.410115 containerd[1572]: time="2025-05-12T13:10:38.410026596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 12 13:10:38.411231 containerd[1572]: time="2025-05-12T13:10:38.411201655Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:10:38.413043 containerd[1572]: time="2025-05-12T13:10:38.413005491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:10:38.413564 containerd[1572]: time="2025-05-12T13:10:38.413519892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.223630035s"
May 12 13:10:38.413564 containerd[1572]: time="2025-05-12T13:10:38.413557624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 12 13:10:38.414424 containerd[1572]: time="2025-05-12T13:10:38.414389144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 12 13:10:38.421694 containerd[1572]: time="2025-05-12T13:10:38.421651147Z" level=info msg="CreateContainer within sandbox \"91bb9a18b73244c0a491766ea9afac0ceb7d440bd631a3f03bbf1386058bea44\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 12 13:10:38.432969 containerd[1572]: time="2025-05-12T13:10:38.432919406Z" level=info msg="Container 8ac411a5f15b43552fb14dc5e219c6381997dfe1651cb10eb94f4e67bf265508: CDI devices from CRI Config.CDIDevices: []"
May 12 13:10:38.442006 containerd[1572]: time="2025-05-12T13:10:38.441963644Z" level=info msg="CreateContainer within sandbox \"91bb9a18b73244c0a491766ea9afac0ceb7d440bd631a3f03bbf1386058bea44\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8ac411a5f15b43552fb14dc5e219c6381997dfe1651cb10eb94f4e67bf265508\""
May 12 13:10:38.442374 containerd[1572]: time="2025-05-12T13:10:38.442345425Z" level=info msg="StartContainer for \"8ac411a5f15b43552fb14dc5e219c6381997dfe1651cb10eb94f4e67bf265508\""
May 12 13:10:38.443353 containerd[1572]: time="2025-05-12T13:10:38.443329202Z" level=info msg="connecting to shim 8ac411a5f15b43552fb14dc5e219c6381997dfe1651cb10eb94f4e67bf265508" address="unix:///run/containerd/s/2c19528a8fda56874e41ac07512da24cd119b9e51ba4dcf40cdd732fafbaf951" protocol=ttrpc version=3
May 12 13:10:38.449881 kubelet[2842]: E0512 13:10:38.449847 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95"
May 12 13:10:38.468537 systemd[1]: Started cri-containerd-8ac411a5f15b43552fb14dc5e219c6381997dfe1651cb10eb94f4e67bf265508.scope - libcontainer container 8ac411a5f15b43552fb14dc5e219c6381997dfe1651cb10eb94f4e67bf265508.
May 12 13:10:38.521518 containerd[1572]: time="2025-05-12T13:10:38.521469779Z" level=info msg="StartContainer for \"8ac411a5f15b43552fb14dc5e219c6381997dfe1651cb10eb94f4e67bf265508\" returns successfully"
May 12 13:10:39.582215 kubelet[2842]: E0512 13:10:39.582171 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:39.582215 kubelet[2842]: W0512 13:10:39.582197 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:39.582215 kubelet[2842]: E0512 13:10:39.582216 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.582808 kubelet[2842]: E0512 13:10:39.582434 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.582808 kubelet[2842]: W0512 13:10:39.582441 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.582808 kubelet[2842]: E0512 13:10:39.582449 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.582808 kubelet[2842]: E0512 13:10:39.582602 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.582808 kubelet[2842]: W0512 13:10:39.582608 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.582808 kubelet[2842]: E0512 13:10:39.582615 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.582808 kubelet[2842]: E0512 13:10:39.582762 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.582808 kubelet[2842]: W0512 13:10:39.582769 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.582808 kubelet[2842]: E0512 13:10:39.582776 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.583034 kubelet[2842]: E0512 13:10:39.582933 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.583034 kubelet[2842]: W0512 13:10:39.582940 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.583034 kubelet[2842]: E0512 13:10:39.582948 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.583119 kubelet[2842]: E0512 13:10:39.583093 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.583119 kubelet[2842]: W0512 13:10:39.583100 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.583119 kubelet[2842]: E0512 13:10:39.583107 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.583290 kubelet[2842]: E0512 13:10:39.583274 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.583290 kubelet[2842]: W0512 13:10:39.583284 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.583290 kubelet[2842]: E0512 13:10:39.583291 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.583452 kubelet[2842]: E0512 13:10:39.583438 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.583452 kubelet[2842]: W0512 13:10:39.583447 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.583503 kubelet[2842]: E0512 13:10:39.583454 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.583618 kubelet[2842]: E0512 13:10:39.583604 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.583618 kubelet[2842]: W0512 13:10:39.583613 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.583618 kubelet[2842]: E0512 13:10:39.583620 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.583781 kubelet[2842]: E0512 13:10:39.583764 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.583781 kubelet[2842]: W0512 13:10:39.583775 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.583781 kubelet[2842]: E0512 13:10:39.583782 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.583942 kubelet[2842]: E0512 13:10:39.583927 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.583942 kubelet[2842]: W0512 13:10:39.583937 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.583998 kubelet[2842]: E0512 13:10:39.583944 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.584104 kubelet[2842]: E0512 13:10:39.584090 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.584104 kubelet[2842]: W0512 13:10:39.584100 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.584167 kubelet[2842]: E0512 13:10:39.584108 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.584289 kubelet[2842]: E0512 13:10:39.584273 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.584289 kubelet[2842]: W0512 13:10:39.584284 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.584354 kubelet[2842]: E0512 13:10:39.584291 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.584501 kubelet[2842]: E0512 13:10:39.584486 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.584501 kubelet[2842]: W0512 13:10:39.584495 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.584501 kubelet[2842]: E0512 13:10:39.584502 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.584666 kubelet[2842]: E0512 13:10:39.584651 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.584666 kubelet[2842]: W0512 13:10:39.584661 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.584718 kubelet[2842]: E0512 13:10:39.584669 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.595891 kubelet[2842]: I0512 13:10:39.595817 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d59b46f79-k2f8k" podStartSLOduration=2.370693587 podStartE2EDuration="5.595800724s" podCreationTimestamp="2025-05-12 13:10:34 +0000 UTC" firstStartedPulling="2025-05-12 13:10:35.189114641 +0000 UTC m=+20.825462135" lastFinishedPulling="2025-05-12 13:10:38.414221758 +0000 UTC m=+24.050569272" observedRunningTime="2025-05-12 13:10:39.595572343 +0000 UTC m=+25.231919847" watchObservedRunningTime="2025-05-12 13:10:39.595800724 +0000 UTC m=+25.232148218"
May 12 13:10:39.604965 kubelet[2842]: E0512 13:10:39.604928 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:39.604965 kubelet[2842]: W0512 13:10:39.604947 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:39.604965 kubelet[2842]: E0512 13:10:39.604964 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 12 13:10:39.605261 kubelet[2842]: E0512 13:10:39.605201 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 12 13:10:39.605261 kubelet[2842]: W0512 13:10:39.605235 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 12 13:10:39.605344 kubelet[2842]: E0512 13:10:39.605293 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.605613 kubelet[2842]: E0512 13:10:39.605587 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.605755 kubelet[2842]: W0512 13:10:39.605734 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.605807 kubelet[2842]: E0512 13:10:39.605757 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.606033 kubelet[2842]: E0512 13:10:39.606011 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.606033 kubelet[2842]: W0512 13:10:39.606030 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.606084 kubelet[2842]: E0512 13:10:39.606049 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.606318 kubelet[2842]: E0512 13:10:39.606284 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.606318 kubelet[2842]: W0512 13:10:39.606306 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.606392 kubelet[2842]: E0512 13:10:39.606328 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.606530 kubelet[2842]: E0512 13:10:39.606515 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.606530 kubelet[2842]: W0512 13:10:39.606527 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.606599 kubelet[2842]: E0512 13:10:39.606541 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.606800 kubelet[2842]: E0512 13:10:39.606773 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.606800 kubelet[2842]: W0512 13:10:39.606787 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.606800 kubelet[2842]: E0512 13:10:39.606804 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.606972 kubelet[2842]: E0512 13:10:39.606959 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.606972 kubelet[2842]: W0512 13:10:39.606967 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.607018 kubelet[2842]: E0512 13:10:39.606979 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.607159 kubelet[2842]: E0512 13:10:39.607139 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.607159 kubelet[2842]: W0512 13:10:39.607153 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.607283 kubelet[2842]: E0512 13:10:39.607171 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.607416 kubelet[2842]: E0512 13:10:39.607401 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.607453 kubelet[2842]: W0512 13:10:39.607416 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.607453 kubelet[2842]: E0512 13:10:39.607435 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.607669 kubelet[2842]: E0512 13:10:39.607657 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.607694 kubelet[2842]: W0512 13:10:39.607668 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.607732 kubelet[2842]: E0512 13:10:39.607712 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.607855 kubelet[2842]: E0512 13:10:39.607843 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.607883 kubelet[2842]: W0512 13:10:39.607856 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.607926 kubelet[2842]: E0512 13:10:39.607903 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.608117 kubelet[2842]: E0512 13:10:39.608095 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.608117 kubelet[2842]: W0512 13:10:39.608106 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.608192 kubelet[2842]: E0512 13:10:39.608123 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.608571 kubelet[2842]: E0512 13:10:39.608457 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.608571 kubelet[2842]: W0512 13:10:39.608470 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.608571 kubelet[2842]: E0512 13:10:39.608484 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.608722 kubelet[2842]: E0512 13:10:39.608699 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.608722 kubelet[2842]: W0512 13:10:39.608715 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.608817 kubelet[2842]: E0512 13:10:39.608732 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.608949 kubelet[2842]: E0512 13:10:39.608933 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.608949 kubelet[2842]: W0512 13:10:39.608947 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.609000 kubelet[2842]: E0512 13:10:39.608960 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:39.609241 kubelet[2842]: E0512 13:10:39.609222 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.609241 kubelet[2842]: W0512 13:10:39.609236 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.609356 kubelet[2842]: E0512 13:10:39.609277 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:39.609574 kubelet[2842]: E0512 13:10:39.609556 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:39.609574 kubelet[2842]: W0512 13:10:39.609572 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:39.609625 kubelet[2842]: E0512 13:10:39.609586 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.450742 kubelet[2842]: E0512 13:10:40.450706 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" May 12 13:10:40.592130 kubelet[2842]: E0512 13:10:40.592097 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.592130 kubelet[2842]: W0512 13:10:40.592116 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.592130 kubelet[2842]: E0512 13:10:40.592134 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.592552 kubelet[2842]: E0512 13:10:40.592316 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.592552 kubelet[2842]: W0512 13:10:40.592323 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.592552 kubelet[2842]: E0512 13:10:40.592331 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.592552 kubelet[2842]: E0512 13:10:40.592472 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.592552 kubelet[2842]: W0512 13:10:40.592478 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.592552 kubelet[2842]: E0512 13:10:40.592485 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.592671 kubelet[2842]: E0512 13:10:40.592622 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.592671 kubelet[2842]: W0512 13:10:40.592628 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.592671 kubelet[2842]: E0512 13:10:40.592638 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.592800 kubelet[2842]: E0512 13:10:40.592781 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.592800 kubelet[2842]: W0512 13:10:40.592790 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.592800 kubelet[2842]: E0512 13:10:40.592797 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.592933 kubelet[2842]: E0512 13:10:40.592916 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.592933 kubelet[2842]: W0512 13:10:40.592925 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.592933 kubelet[2842]: E0512 13:10:40.592931 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.593064 kubelet[2842]: E0512 13:10:40.593048 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.593064 kubelet[2842]: W0512 13:10:40.593056 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.593064 kubelet[2842]: E0512 13:10:40.593063 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.593215 kubelet[2842]: E0512 13:10:40.593203 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.593215 kubelet[2842]: W0512 13:10:40.593213 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.593286 kubelet[2842]: E0512 13:10:40.593222 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.593421 kubelet[2842]: E0512 13:10:40.593403 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.593421 kubelet[2842]: W0512 13:10:40.593415 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.593460 kubelet[2842]: E0512 13:10:40.593424 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.593584 kubelet[2842]: E0512 13:10:40.593568 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.593584 kubelet[2842]: W0512 13:10:40.593577 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.593584 kubelet[2842]: E0512 13:10:40.593585 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.593737 kubelet[2842]: E0512 13:10:40.593723 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.593737 kubelet[2842]: W0512 13:10:40.593733 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.593784 kubelet[2842]: E0512 13:10:40.593740 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.593884 kubelet[2842]: E0512 13:10:40.593874 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.593884 kubelet[2842]: W0512 13:10:40.593883 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.593931 kubelet[2842]: E0512 13:10:40.593890 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.594025 kubelet[2842]: E0512 13:10:40.594015 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.594025 kubelet[2842]: W0512 13:10:40.594023 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.594094 kubelet[2842]: E0512 13:10:40.594029 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.594160 kubelet[2842]: E0512 13:10:40.594145 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.594160 kubelet[2842]: W0512 13:10:40.594153 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.594160 kubelet[2842]: E0512 13:10:40.594159 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.594362 kubelet[2842]: E0512 13:10:40.594347 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.594362 kubelet[2842]: W0512 13:10:40.594360 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.594433 kubelet[2842]: E0512 13:10:40.594371 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.612515 kubelet[2842]: E0512 13:10:40.612483 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.612515 kubelet[2842]: W0512 13:10:40.612503 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.612674 kubelet[2842]: E0512 13:10:40.612518 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.612857 kubelet[2842]: E0512 13:10:40.612728 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.612857 kubelet[2842]: W0512 13:10:40.612745 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.612857 kubelet[2842]: E0512 13:10:40.612760 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.613036 kubelet[2842]: E0512 13:10:40.613018 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.613036 kubelet[2842]: W0512 13:10:40.613031 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.613118 kubelet[2842]: E0512 13:10:40.613043 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.613214 kubelet[2842]: E0512 13:10:40.613197 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.613304 kubelet[2842]: W0512 13:10:40.613221 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.613304 kubelet[2842]: E0512 13:10:40.613231 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.613513 kubelet[2842]: E0512 13:10:40.613492 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.613513 kubelet[2842]: W0512 13:10:40.613502 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.613513 kubelet[2842]: E0512 13:10:40.613513 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.613672 kubelet[2842]: E0512 13:10:40.613646 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.613672 kubelet[2842]: W0512 13:10:40.613651 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.613672 kubelet[2842]: E0512 13:10:40.613662 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.613830 kubelet[2842]: E0512 13:10:40.613818 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.613830 kubelet[2842]: W0512 13:10:40.613826 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.613902 kubelet[2842]: E0512 13:10:40.613836 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.614074 kubelet[2842]: E0512 13:10:40.614058 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.614074 kubelet[2842]: W0512 13:10:40.614070 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.614158 kubelet[2842]: E0512 13:10:40.614085 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.614294 kubelet[2842]: E0512 13:10:40.614281 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.614294 kubelet[2842]: W0512 13:10:40.614292 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.614362 kubelet[2842]: E0512 13:10:40.614313 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.614469 kubelet[2842]: E0512 13:10:40.614455 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.614469 kubelet[2842]: W0512 13:10:40.614464 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.614537 kubelet[2842]: E0512 13:10:40.614484 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.614622 kubelet[2842]: E0512 13:10:40.614609 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.614622 kubelet[2842]: W0512 13:10:40.614618 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.614673 kubelet[2842]: E0512 13:10:40.614631 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.614808 kubelet[2842]: E0512 13:10:40.614783 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.614808 kubelet[2842]: W0512 13:10:40.614798 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.614872 kubelet[2842]: E0512 13:10:40.614811 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.614997 kubelet[2842]: E0512 13:10:40.614983 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.614997 kubelet[2842]: W0512 13:10:40.614994 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.615056 kubelet[2842]: E0512 13:10:40.615006 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.615226 kubelet[2842]: E0512 13:10:40.615212 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.615226 kubelet[2842]: W0512 13:10:40.615223 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.615318 kubelet[2842]: E0512 13:10:40.615233 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.615400 kubelet[2842]: E0512 13:10:40.615387 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.615400 kubelet[2842]: W0512 13:10:40.615396 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.615458 kubelet[2842]: E0512 13:10:40.615409 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.615567 kubelet[2842]: E0512 13:10:40.615553 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.615567 kubelet[2842]: W0512 13:10:40.615564 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.615624 kubelet[2842]: E0512 13:10:40.615579 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:40.615765 kubelet[2842]: E0512 13:10:40.615748 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.615765 kubelet[2842]: W0512 13:10:40.615760 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.615841 kubelet[2842]: E0512 13:10:40.615773 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:10:40.616226 kubelet[2842]: E0512 13:10:40.616206 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:10:40.616226 kubelet[2842]: W0512 13:10:40.616218 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:10:40.616226 kubelet[2842]: E0512 13:10:40.616227 2842 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:10:41.018452 containerd[1572]: time="2025-05-12T13:10:41.018409608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:41.019554 containerd[1572]: time="2025-05-12T13:10:41.019521095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 12 13:10:41.021015 containerd[1572]: time="2025-05-12T13:10:41.020995315Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:41.022929 containerd[1572]: time="2025-05-12T13:10:41.022895960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:41.023617 containerd[1572]: time="2025-05-12T13:10:41.023587124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.609170018s" May 12 13:10:41.023617 containerd[1572]: time="2025-05-12T13:10:41.023611640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 12 13:10:41.025050 containerd[1572]: time="2025-05-12T13:10:41.025024174Z" level=info msg="CreateContainer within sandbox \"49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 12 13:10:41.036103 containerd[1572]: time="2025-05-12T13:10:41.036057885Z" level=info msg="Container e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c: CDI devices from CRI Config.CDIDevices: []" May 12 13:10:41.045471 containerd[1572]: time="2025-05-12T13:10:41.045419843Z" level=info msg="CreateContainer within sandbox \"49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c\"" May 12 13:10:41.045988 containerd[1572]: time="2025-05-12T13:10:41.045941747Z" level=info msg="StartContainer for \"e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c\"" May 12 13:10:41.047470 containerd[1572]: time="2025-05-12T13:10:41.047425747Z" level=info msg="connecting to shim e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c" address="unix:///run/containerd/s/f5b582b32d740913b4376f8caff8c565a0d3bb109c03e7ead292141753cc8427" protocol=ttrpc version=3 May 12 13:10:41.074401 systemd[1]: Started cri-containerd-e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c.scope - libcontainer container 
e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c. May 12 13:10:41.116383 containerd[1572]: time="2025-05-12T13:10:41.116347531Z" level=info msg="StartContainer for \"e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c\" returns successfully" May 12 13:10:41.128212 systemd[1]: cri-containerd-e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c.scope: Deactivated successfully. May 12 13:10:41.129911 containerd[1572]: time="2025-05-12T13:10:41.129880546Z" level=info msg="received exit event container_id:\"e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c\" id:\"e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c\" pid:3494 exited_at:{seconds:1747055441 nanos:129544783}" May 12 13:10:41.129999 containerd[1572]: time="2025-05-12T13:10:41.129977779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c\" id:\"e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c\" pid:3494 exited_at:{seconds:1747055441 nanos:129544783}" May 12 13:10:41.152587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f691387d3e4b6c9296778df787db66470ecfd458eaf7e98359025522cbfc2c-rootfs.mount: Deactivated successfully. May 12 13:10:42.449701 kubelet[2842]: E0512 13:10:42.449661 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" May 12 13:10:42.496816 systemd[1]: Started sshd@7-10.0.0.126:22-10.0.0.1:54368.service - OpenSSH per-connection server daemon (10.0.0.1:54368). 
May 12 13:10:42.525080 containerd[1572]: time="2025-05-12T13:10:42.525042626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 12 13:10:42.554883 sshd[3537]: Accepted publickey for core from 10.0.0.1 port 54368 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:10:42.556326 sshd-session[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:10:42.560607 systemd-logind[1558]: New session 8 of user core. May 12 13:10:42.570377 systemd[1]: Started session-8.scope - Session 8 of User core. May 12 13:10:42.675379 sshd[3539]: Connection closed by 10.0.0.1 port 54368 May 12 13:10:42.675637 sshd-session[3537]: pam_unix(sshd:session): session closed for user core May 12 13:10:42.679232 systemd[1]: sshd@7-10.0.0.126:22-10.0.0.1:54368.service: Deactivated successfully. May 12 13:10:42.681154 systemd[1]: session-8.scope: Deactivated successfully. May 12 13:10:42.681898 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. May 12 13:10:42.683132 systemd-logind[1558]: Removed session 8. May 12 13:10:44.450157 kubelet[2842]: E0512 13:10:44.450110 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" May 12 13:10:46.450873 kubelet[2842]: E0512 13:10:46.450818 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" May 12 13:10:47.686644 systemd[1]: Started sshd@8-10.0.0.126:22-10.0.0.1:54380.service - OpenSSH per-connection server daemon (10.0.0.1:54380). 
May 12 13:10:47.945693 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 54380 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:10:47.947210 sshd-session[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:10:47.952872 systemd-logind[1558]: New session 9 of user core. May 12 13:10:47.957368 systemd[1]: Started session-9.scope - Session 9 of User core. May 12 13:10:48.076488 sshd[3559]: Connection closed by 10.0.0.1 port 54380 May 12 13:10:48.076750 sshd-session[3553]: pam_unix(sshd:session): session closed for user core May 12 13:10:48.080618 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. May 12 13:10:48.080865 systemd[1]: sshd@8-10.0.0.126:22-10.0.0.1:54380.service: Deactivated successfully. May 12 13:10:48.083337 systemd[1]: session-9.scope: Deactivated successfully. May 12 13:10:48.087434 systemd-logind[1558]: Removed session 9. May 12 13:10:48.450014 kubelet[2842]: E0512 13:10:48.449963 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" May 12 13:10:49.657353 containerd[1572]: time="2025-05-12T13:10:49.657291785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:49.658206 containerd[1572]: time="2025-05-12T13:10:49.658174996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 12 13:10:49.660020 containerd[1572]: time="2025-05-12T13:10:49.659992036Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 
13:10:49.661882 containerd[1572]: time="2025-05-12T13:10:49.661834523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:10:49.662650 containerd[1572]: time="2025-05-12T13:10:49.662604913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 7.137523805s" May 12 13:10:49.662650 containerd[1572]: time="2025-05-12T13:10:49.662641522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 12 13:10:49.664672 containerd[1572]: time="2025-05-12T13:10:49.664640574Z" level=info msg="CreateContainer within sandbox \"49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 12 13:10:49.673810 containerd[1572]: time="2025-05-12T13:10:49.673771337Z" level=info msg="Container 27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929: CDI devices from CRI Config.CDIDevices: []" May 12 13:10:49.683751 containerd[1572]: time="2025-05-12T13:10:49.683711494Z" level=info msg="CreateContainer within sandbox \"49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929\"" May 12 13:10:49.684215 containerd[1572]: time="2025-05-12T13:10:49.684160569Z" level=info msg="StartContainer for \"27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929\"" May 12 13:10:49.685672 containerd[1572]: 
time="2025-05-12T13:10:49.685639873Z" level=info msg="connecting to shim 27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929" address="unix:///run/containerd/s/f5b582b32d740913b4376f8caff8c565a0d3bb109c03e7ead292141753cc8427" protocol=ttrpc version=3 May 12 13:10:49.711428 systemd[1]: Started cri-containerd-27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929.scope - libcontainer container 27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929. May 12 13:10:49.753525 containerd[1572]: time="2025-05-12T13:10:49.753477027Z" level=info msg="StartContainer for \"27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929\" returns successfully" May 12 13:10:50.450007 kubelet[2842]: E0512 13:10:50.449959 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" May 12 13:10:51.425339 containerd[1572]: time="2025-05-12T13:10:51.425279875Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 13:10:51.428211 systemd[1]: cri-containerd-27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929.scope: Deactivated successfully. May 12 13:10:51.428535 systemd[1]: cri-containerd-27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929.scope: Consumed 499ms CPU time, 156.2M memory peak, 4K read from disk, 154M written to disk. 
May 12 13:10:51.430641 containerd[1572]: time="2025-05-12T13:10:51.430585936Z" level=info msg="received exit event container_id:\"27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929\" id:\"27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929\" pid:3588 exited_at:{seconds:1747055451 nanos:430364009}" May 12 13:10:51.430853 containerd[1572]: time="2025-05-12T13:10:51.430801031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929\" id:\"27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929\" pid:3588 exited_at:{seconds:1747055451 nanos:430364009}" May 12 13:10:51.437263 kubelet[2842]: I0512 13:10:51.436799 2842 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 12 13:10:51.457781 kubelet[2842]: I0512 13:10:51.457738 2842 topology_manager.go:215] "Topology Admit Handler" podUID="593e3376-13c4-48db-91c1-b011b6e2f9ed" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nrfvd" May 12 13:10:51.458181 kubelet[2842]: I0512 13:10:51.457902 2842 topology_manager.go:215] "Topology Admit Handler" podUID="0ec87d19-6b2f-4a96-8c61-20622365556d" podNamespace="calico-apiserver" podName="calico-apiserver-6d5d7fc86f-xt4rv" May 12 13:10:51.459652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27a911daa32cac5705800511ec88ec75e7e34d390e46433e0b7c3a9777412929-rootfs.mount: Deactivated successfully. 
May 12 13:10:51.459777 kubelet[2842]: I0512 13:10:51.459739 2842 topology_manager.go:215] "Topology Admit Handler" podUID="3e848244-737b-478e-8968-b75481ca35df" podNamespace="calico-system" podName="calico-kube-controllers-5948b4d8f7-qlvpt" May 12 13:10:51.461881 kubelet[2842]: I0512 13:10:51.461849 2842 topology_manager.go:215] "Topology Admit Handler" podUID="0b2247fa-6232-4294-bb3c-70f80086d491" podNamespace="calico-apiserver" podName="calico-apiserver-6d5d7fc86f-xn57t" May 12 13:10:51.462721 kubelet[2842]: I0512 13:10:51.462698 2842 topology_manager.go:215] "Topology Admit Handler" podUID="6cda24eb-9679-4236-8871-2a549e762495" podNamespace="kube-system" podName="coredns-7db6d8ff4d-68qdn" May 12 13:10:51.472104 systemd[1]: Created slice kubepods-besteffort-pod0ec87d19_6b2f_4a96_8c61_20622365556d.slice - libcontainer container kubepods-besteffort-pod0ec87d19_6b2f_4a96_8c61_20622365556d.slice. May 12 13:10:51.477415 systemd[1]: Created slice kubepods-burstable-pod593e3376_13c4_48db_91c1_b011b6e2f9ed.slice - libcontainer container kubepods-burstable-pod593e3376_13c4_48db_91c1_b011b6e2f9ed.slice. 
May 12 13:10:51.488031 kubelet[2842]: I0512 13:10:51.486511 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wtr2\" (UniqueName: \"kubernetes.io/projected/593e3376-13c4-48db-91c1-b011b6e2f9ed-kube-api-access-8wtr2\") pod \"coredns-7db6d8ff4d-nrfvd\" (UID: \"593e3376-13c4-48db-91c1-b011b6e2f9ed\") " pod="kube-system/coredns-7db6d8ff4d-nrfvd" May 12 13:10:51.488031 kubelet[2842]: I0512 13:10:51.486548 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ec87d19-6b2f-4a96-8c61-20622365556d-calico-apiserver-certs\") pod \"calico-apiserver-6d5d7fc86f-xt4rv\" (UID: \"0ec87d19-6b2f-4a96-8c61-20622365556d\") " pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xt4rv" May 12 13:10:51.488031 kubelet[2842]: I0512 13:10:51.486568 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knc8\" (UniqueName: \"kubernetes.io/projected/0ec87d19-6b2f-4a96-8c61-20622365556d-kube-api-access-5knc8\") pod \"calico-apiserver-6d5d7fc86f-xt4rv\" (UID: \"0ec87d19-6b2f-4a96-8c61-20622365556d\") " pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xt4rv" May 12 13:10:51.488031 kubelet[2842]: I0512 13:10:51.486583 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cda24eb-9679-4236-8871-2a549e762495-config-volume\") pod \"coredns-7db6d8ff4d-68qdn\" (UID: \"6cda24eb-9679-4236-8871-2a549e762495\") " pod="kube-system/coredns-7db6d8ff4d-68qdn" May 12 13:10:51.488031 kubelet[2842]: I0512 13:10:51.486598 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b2247fa-6232-4294-bb3c-70f80086d491-calico-apiserver-certs\") pod 
\"calico-apiserver-6d5d7fc86f-xn57t\" (UID: \"0b2247fa-6232-4294-bb3c-70f80086d491\") " pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xn57t" May 12 13:10:51.487079 systemd[1]: Created slice kubepods-besteffort-pod0b2247fa_6232_4294_bb3c_70f80086d491.slice - libcontainer container kubepods-besteffort-pod0b2247fa_6232_4294_bb3c_70f80086d491.slice. May 12 13:10:51.488301 kubelet[2842]: I0512 13:10:51.486613 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r7pg\" (UniqueName: \"kubernetes.io/projected/6cda24eb-9679-4236-8871-2a549e762495-kube-api-access-9r7pg\") pod \"coredns-7db6d8ff4d-68qdn\" (UID: \"6cda24eb-9679-4236-8871-2a549e762495\") " pod="kube-system/coredns-7db6d8ff4d-68qdn" May 12 13:10:51.488301 kubelet[2842]: I0512 13:10:51.486629 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/593e3376-13c4-48db-91c1-b011b6e2f9ed-config-volume\") pod \"coredns-7db6d8ff4d-nrfvd\" (UID: \"593e3376-13c4-48db-91c1-b011b6e2f9ed\") " pod="kube-system/coredns-7db6d8ff4d-nrfvd" May 12 13:10:51.488301 kubelet[2842]: I0512 13:10:51.486643 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pth4n\" (UniqueName: \"kubernetes.io/projected/0b2247fa-6232-4294-bb3c-70f80086d491-kube-api-access-pth4n\") pod \"calico-apiserver-6d5d7fc86f-xn57t\" (UID: \"0b2247fa-6232-4294-bb3c-70f80086d491\") " pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xn57t" May 12 13:10:51.488301 kubelet[2842]: I0512 13:10:51.486657 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5c9\" (UniqueName: \"kubernetes.io/projected/3e848244-737b-478e-8968-b75481ca35df-kube-api-access-cw5c9\") pod \"calico-kube-controllers-5948b4d8f7-qlvpt\" (UID: \"3e848244-737b-478e-8968-b75481ca35df\") " 
pod="calico-system/calico-kube-controllers-5948b4d8f7-qlvpt" May 12 13:10:51.488301 kubelet[2842]: I0512 13:10:51.486672 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e848244-737b-478e-8968-b75481ca35df-tigera-ca-bundle\") pod \"calico-kube-controllers-5948b4d8f7-qlvpt\" (UID: \"3e848244-737b-478e-8968-b75481ca35df\") " pod="calico-system/calico-kube-controllers-5948b4d8f7-qlvpt" May 12 13:10:51.493922 systemd[1]: Created slice kubepods-burstable-pod6cda24eb_9679_4236_8871_2a549e762495.slice - libcontainer container kubepods-burstable-pod6cda24eb_9679_4236_8871_2a549e762495.slice. May 12 13:10:51.499419 systemd[1]: Created slice kubepods-besteffort-pod3e848244_737b_478e_8968_b75481ca35df.slice - libcontainer container kubepods-besteffort-pod3e848244_737b_478e_8968_b75481ca35df.slice. May 12 13:10:51.785945 containerd[1572]: time="2025-05-12T13:10:51.785824881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xt4rv,Uid:0ec87d19-6b2f-4a96-8c61-20622365556d,Namespace:calico-apiserver,Attempt:0,}" May 12 13:10:51.786153 containerd[1572]: time="2025-05-12T13:10:51.785831844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrfvd,Uid:593e3376-13c4-48db-91c1-b011b6e2f9ed,Namespace:kube-system,Attempt:0,}" May 12 13:10:51.792896 containerd[1572]: time="2025-05-12T13:10:51.792850839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xn57t,Uid:0b2247fa-6232-4294-bb3c-70f80086d491,Namespace:calico-apiserver,Attempt:0,}" May 12 13:10:51.797555 containerd[1572]: time="2025-05-12T13:10:51.797518970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68qdn,Uid:6cda24eb-9679-4236-8871-2a549e762495,Namespace:kube-system,Attempt:0,}" May 12 13:10:51.803294 containerd[1572]: time="2025-05-12T13:10:51.802771932Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5948b4d8f7-qlvpt,Uid:3e848244-737b-478e-8968-b75481ca35df,Namespace:calico-system,Attempt:0,}" May 12 13:10:51.881700 containerd[1572]: time="2025-05-12T13:10:51.881586035Z" level=error msg="Failed to destroy network for sandbox \"b283b42d0a8d21e5d1c509bfe7b3f5b2c776f9d47bfbc57d41016c67237ec3de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.884470 containerd[1572]: time="2025-05-12T13:10:51.884424825Z" level=error msg="Failed to destroy network for sandbox \"672e912627a9dd8b132edb6f6978cc3efcbfe88ebc4465f28cb910f6914693f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.884756 containerd[1572]: time="2025-05-12T13:10:51.884730039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xn57t,Uid:0b2247fa-6232-4294-bb3c-70f80086d491,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b283b42d0a8d21e5d1c509bfe7b3f5b2c776f9d47bfbc57d41016c67237ec3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.885321 kubelet[2842]: E0512 13:10:51.885018 2842 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b283b42d0a8d21e5d1c509bfe7b3f5b2c776f9d47bfbc57d41016c67237ec3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 
13:10:51.885321 kubelet[2842]: E0512 13:10:51.885100 2842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b283b42d0a8d21e5d1c509bfe7b3f5b2c776f9d47bfbc57d41016c67237ec3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xn57t" May 12 13:10:51.885321 kubelet[2842]: E0512 13:10:51.885125 2842 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b283b42d0a8d21e5d1c509bfe7b3f5b2c776f9d47bfbc57d41016c67237ec3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xn57t" May 12 13:10:51.885783 kubelet[2842]: E0512 13:10:51.885161 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5d7fc86f-xn57t_calico-apiserver(0b2247fa-6232-4294-bb3c-70f80086d491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5d7fc86f-xn57t_calico-apiserver(0b2247fa-6232-4294-bb3c-70f80086d491)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b283b42d0a8d21e5d1c509bfe7b3f5b2c776f9d47bfbc57d41016c67237ec3de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xn57t" podUID="0b2247fa-6232-4294-bb3c-70f80086d491" May 12 13:10:51.885830 containerd[1572]: time="2025-05-12T13:10:51.885721063Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrfvd,Uid:593e3376-13c4-48db-91c1-b011b6e2f9ed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"672e912627a9dd8b132edb6f6978cc3efcbfe88ebc4465f28cb910f6914693f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.885977 kubelet[2842]: E0512 13:10:51.885961 2842 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672e912627a9dd8b132edb6f6978cc3efcbfe88ebc4465f28cb910f6914693f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.886090 kubelet[2842]: E0512 13:10:51.886058 2842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672e912627a9dd8b132edb6f6978cc3efcbfe88ebc4465f28cb910f6914693f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nrfvd" May 12 13:10:51.886181 kubelet[2842]: E0512 13:10:51.886139 2842 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672e912627a9dd8b132edb6f6978cc3efcbfe88ebc4465f28cb910f6914693f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nrfvd" May 12 13:10:51.886791 kubelet[2842]: E0512 13:10:51.886734 2842 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nrfvd_kube-system(593e3376-13c4-48db-91c1-b011b6e2f9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nrfvd_kube-system(593e3376-13c4-48db-91c1-b011b6e2f9ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"672e912627a9dd8b132edb6f6978cc3efcbfe88ebc4465f28cb910f6914693f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nrfvd" podUID="593e3376-13c4-48db-91c1-b011b6e2f9ed" May 12 13:10:51.899007 containerd[1572]: time="2025-05-12T13:10:51.898963254Z" level=error msg="Failed to destroy network for sandbox \"e6f3a697922b970e5464f9ee5a435b73df9ec36718751ad3bbd456156bfc404e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.900371 containerd[1572]: time="2025-05-12T13:10:51.900321219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68qdn,Uid:6cda24eb-9679-4236-8871-2a549e762495,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f3a697922b970e5464f9ee5a435b73df9ec36718751ad3bbd456156bfc404e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.900682 kubelet[2842]: E0512 13:10:51.900558 2842 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f3a697922b970e5464f9ee5a435b73df9ec36718751ad3bbd456156bfc404e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.900682 kubelet[2842]: E0512 13:10:51.900615 2842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f3a697922b970e5464f9ee5a435b73df9ec36718751ad3bbd456156bfc404e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-68qdn" May 12 13:10:51.900682 kubelet[2842]: E0512 13:10:51.900635 2842 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f3a697922b970e5464f9ee5a435b73df9ec36718751ad3bbd456156bfc404e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-68qdn" May 12 13:10:51.900809 kubelet[2842]: E0512 13:10:51.900672 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-68qdn_kube-system(6cda24eb-9679-4236-8871-2a549e762495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-68qdn_kube-system(6cda24eb-9679-4236-8871-2a549e762495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6f3a697922b970e5464f9ee5a435b73df9ec36718751ad3bbd456156bfc404e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-68qdn" podUID="6cda24eb-9679-4236-8871-2a549e762495" May 12 13:10:51.902718 containerd[1572]: time="2025-05-12T13:10:51.902690616Z" 
level=error msg="Failed to destroy network for sandbox \"2ac8cfb05d09a382927cf4ac8731e50538fd55865216cffb0ea2752bc1c49f63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.904047 containerd[1572]: time="2025-05-12T13:10:51.903993737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xt4rv,Uid:0ec87d19-6b2f-4a96-8c61-20622365556d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac8cfb05d09a382927cf4ac8731e50538fd55865216cffb0ea2752bc1c49f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.904222 kubelet[2842]: E0512 13:10:51.904192 2842 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac8cfb05d09a382927cf4ac8731e50538fd55865216cffb0ea2752bc1c49f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.904506 kubelet[2842]: E0512 13:10:51.904238 2842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac8cfb05d09a382927cf4ac8731e50538fd55865216cffb0ea2752bc1c49f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xt4rv" May 12 13:10:51.904506 kubelet[2842]: E0512 13:10:51.904271 2842 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"2ac8cfb05d09a382927cf4ac8731e50538fd55865216cffb0ea2752bc1c49f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xt4rv" May 12 13:10:51.904506 kubelet[2842]: E0512 13:10:51.904313 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5d7fc86f-xt4rv_calico-apiserver(0ec87d19-6b2f-4a96-8c61-20622365556d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5d7fc86f-xt4rv_calico-apiserver(0ec87d19-6b2f-4a96-8c61-20622365556d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ac8cfb05d09a382927cf4ac8731e50538fd55865216cffb0ea2752bc1c49f63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xt4rv" podUID="0ec87d19-6b2f-4a96-8c61-20622365556d" May 12 13:10:51.909173 containerd[1572]: time="2025-05-12T13:10:51.909126513Z" level=error msg="Failed to destroy network for sandbox \"96f5398e0467b1563e122afa67afa399d6d01e29f5b407aa89e28e5970eaeb03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.910534 containerd[1572]: time="2025-05-12T13:10:51.910468999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5948b4d8f7-qlvpt,Uid:3e848244-737b-478e-8968-b75481ca35df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"96f5398e0467b1563e122afa67afa399d6d01e29f5b407aa89e28e5970eaeb03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.910688 kubelet[2842]: E0512 13:10:51.910662 2842 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f5398e0467b1563e122afa67afa399d6d01e29f5b407aa89e28e5970eaeb03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:51.910759 kubelet[2842]: E0512 13:10:51.910693 2842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f5398e0467b1563e122afa67afa399d6d01e29f5b407aa89e28e5970eaeb03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5948b4d8f7-qlvpt" May 12 13:10:51.910759 kubelet[2842]: E0512 13:10:51.910712 2842 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f5398e0467b1563e122afa67afa399d6d01e29f5b407aa89e28e5970eaeb03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5948b4d8f7-qlvpt" May 12 13:10:51.910759 kubelet[2842]: E0512 13:10:51.910741 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5948b4d8f7-qlvpt_calico-system(3e848244-737b-478e-8968-b75481ca35df)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-5948b4d8f7-qlvpt_calico-system(3e848244-737b-478e-8968-b75481ca35df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96f5398e0467b1563e122afa67afa399d6d01e29f5b407aa89e28e5970eaeb03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5948b4d8f7-qlvpt" podUID="3e848244-737b-478e-8968-b75481ca35df" May 12 13:10:52.455957 systemd[1]: Created slice kubepods-besteffort-pod1c46340b_2b7a_4015_8ce6_2f3287662c95.slice - libcontainer container kubepods-besteffort-pod1c46340b_2b7a_4015_8ce6_2f3287662c95.slice. May 12 13:10:52.458585 containerd[1572]: time="2025-05-12T13:10:52.458555237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgnmw,Uid:1c46340b-2b7a-4015-8ce6-2f3287662c95,Namespace:calico-system,Attempt:0,}" May 12 13:10:52.505950 containerd[1572]: time="2025-05-12T13:10:52.505904092Z" level=error msg="Failed to destroy network for sandbox \"aae795b9193e4d079697e665b8bf2c2db472ae7aee0bf0a921a3040dea73be7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:52.507382 containerd[1572]: time="2025-05-12T13:10:52.507344210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgnmw,Uid:1c46340b-2b7a-4015-8ce6-2f3287662c95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aae795b9193e4d079697e665b8bf2c2db472ae7aee0bf0a921a3040dea73be7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 
13:10:52.507562 kubelet[2842]: E0512 13:10:52.507524 2842 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aae795b9193e4d079697e665b8bf2c2db472ae7aee0bf0a921a3040dea73be7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:10:52.508036 kubelet[2842]: E0512 13:10:52.507575 2842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aae795b9193e4d079697e665b8bf2c2db472ae7aee0bf0a921a3040dea73be7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mgnmw" May 12 13:10:52.508036 kubelet[2842]: E0512 13:10:52.507594 2842 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aae795b9193e4d079697e665b8bf2c2db472ae7aee0bf0a921a3040dea73be7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mgnmw" May 12 13:10:52.508036 kubelet[2842]: E0512 13:10:52.507638 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mgnmw_calico-system(1c46340b-2b7a-4015-8ce6-2f3287662c95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mgnmw_calico-system(1c46340b-2b7a-4015-8ce6-2f3287662c95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aae795b9193e4d079697e665b8bf2c2db472ae7aee0bf0a921a3040dea73be7d\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mgnmw" podUID="1c46340b-2b7a-4015-8ce6-2f3287662c95" May 12 13:10:52.508071 systemd[1]: run-netns-cni\x2d255ac2f0\x2dcf0f\x2d1f85\x2d804e\x2df59cba684e7b.mount: Deactivated successfully. May 12 13:10:52.545534 containerd[1572]: time="2025-05-12T13:10:52.545497631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 12 13:10:53.093462 systemd[1]: Started sshd@9-10.0.0.126:22-10.0.0.1:59258.service - OpenSSH per-connection server daemon (10.0.0.1:59258). May 12 13:10:53.153166 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 59258 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:10:53.154650 sshd-session[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:10:53.158947 systemd-logind[1558]: New session 10 of user core. May 12 13:10:53.168383 systemd[1]: Started session-10.scope - Session 10 of User core. May 12 13:10:53.267917 sshd[3858]: Connection closed by 10.0.0.1 port 59258 May 12 13:10:53.268201 sshd-session[3856]: pam_unix(sshd:session): session closed for user core May 12 13:10:53.271829 systemd[1]: sshd@9-10.0.0.126:22-10.0.0.1:59258.service: Deactivated successfully. May 12 13:10:53.273753 systemd[1]: session-10.scope: Deactivated successfully. May 12 13:10:53.274621 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. May 12 13:10:53.275800 systemd-logind[1558]: Removed session 10. May 12 13:10:58.280692 systemd[1]: Started sshd@10-10.0.0.126:22-10.0.0.1:45176.service - OpenSSH per-connection server daemon (10.0.0.1:45176). 
May 12 13:10:58.340030 sshd[3877]: Accepted publickey for core from 10.0.0.1 port 45176 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:10:58.342683 sshd-session[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:10:58.348447 systemd-logind[1558]: New session 11 of user core. May 12 13:10:58.353412 systemd[1]: Started session-11.scope - Session 11 of User core. May 12 13:10:58.469954 sshd[3879]: Connection closed by 10.0.0.1 port 45176 May 12 13:10:58.471417 sshd-session[3877]: pam_unix(sshd:session): session closed for user core May 12 13:10:58.480914 systemd[1]: sshd@10-10.0.0.126:22-10.0.0.1:45176.service: Deactivated successfully. May 12 13:10:58.483069 systemd[1]: session-11.scope: Deactivated successfully. May 12 13:10:58.484236 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. May 12 13:10:58.489630 systemd[1]: Started sshd@11-10.0.0.126:22-10.0.0.1:45192.service - OpenSSH per-connection server daemon (10.0.0.1:45192). May 12 13:10:58.491023 systemd-logind[1558]: Removed session 11. May 12 13:10:58.536741 sshd[3894]: Accepted publickey for core from 10.0.0.1 port 45192 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:10:58.538359 sshd-session[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:10:58.543134 systemd-logind[1558]: New session 12 of user core. May 12 13:10:58.553403 systemd[1]: Started session-12.scope - Session 12 of User core. May 12 13:10:58.753177 sshd[3896]: Connection closed by 10.0.0.1 port 45192 May 12 13:10:58.753584 sshd-session[3894]: pam_unix(sshd:session): session closed for user core May 12 13:10:58.766095 systemd[1]: sshd@11-10.0.0.126:22-10.0.0.1:45192.service: Deactivated successfully. May 12 13:10:58.768489 systemd[1]: session-12.scope: Deactivated successfully. May 12 13:10:58.769911 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. 
May 12 13:10:58.774664 systemd[1]: Started sshd@12-10.0.0.126:22-10.0.0.1:45206.service - OpenSSH per-connection server daemon (10.0.0.1:45206). May 12 13:10:58.775819 systemd-logind[1558]: Removed session 12. May 12 13:10:58.822452 sshd[3908]: Accepted publickey for core from 10.0.0.1 port 45206 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:10:58.823976 sshd-session[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:10:58.830513 systemd-logind[1558]: New session 13 of user core. May 12 13:10:58.835383 systemd[1]: Started session-13.scope - Session 13 of User core. May 12 13:10:58.990517 sshd[3910]: Connection closed by 10.0.0.1 port 45206 May 12 13:10:59.002556 sshd-session[3908]: pam_unix(sshd:session): session closed for user core May 12 13:10:59.006702 systemd[1]: sshd@12-10.0.0.126:22-10.0.0.1:45206.service: Deactivated successfully. May 12 13:10:59.008876 systemd[1]: session-13.scope: Deactivated successfully. May 12 13:10:59.010665 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. May 12 13:10:59.011856 systemd-logind[1558]: Removed session 13. May 12 13:11:00.273852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437762654.mount: Deactivated successfully. 
May 12 13:11:01.394656 containerd[1572]: time="2025-05-12T13:11:01.394601626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:11:01.395530 containerd[1572]: time="2025-05-12T13:11:01.395501667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 12 13:11:01.396689 containerd[1572]: time="2025-05-12T13:11:01.396658089Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:11:01.398537 containerd[1572]: time="2025-05-12T13:11:01.398508826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:11:01.398972 containerd[1572]: time="2025-05-12T13:11:01.398938714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.853209247s" May 12 13:11:01.399010 containerd[1572]: time="2025-05-12T13:11:01.398970934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 12 13:11:01.408341 containerd[1572]: time="2025-05-12T13:11:01.408295080Z" level=info msg="CreateContainer within sandbox \"49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 12 13:11:01.421083 containerd[1572]: time="2025-05-12T13:11:01.421026317Z" level=info msg="Container 
a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c: CDI devices from CRI Config.CDIDevices: []" May 12 13:11:01.432783 containerd[1572]: time="2025-05-12T13:11:01.432733980Z" level=info msg="CreateContainer within sandbox \"49d5d6b4c7db718a58f6bee14179397ea7fa005ac05b095d41f4f428ca7bb3d1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c\"" May 12 13:11:01.433506 containerd[1572]: time="2025-05-12T13:11:01.433453933Z" level=info msg="StartContainer for \"a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c\"" May 12 13:11:01.463654 containerd[1572]: time="2025-05-12T13:11:01.463599033Z" level=info msg="connecting to shim a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c" address="unix:///run/containerd/s/f5b582b32d740913b4376f8caff8c565a0d3bb109c03e7ead292141753cc8427" protocol=ttrpc version=3 May 12 13:11:01.490436 systemd[1]: Started cri-containerd-a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c.scope - libcontainer container a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c. May 12 13:11:01.864478 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 12 13:11:01.883203 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 12 13:11:01.907006 containerd[1572]: time="2025-05-12T13:11:01.906950282Z" level=info msg="StartContainer for \"a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c\" returns successfully" May 12 13:11:02.986143 containerd[1572]: time="2025-05-12T13:11:02.986085173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c\" id:\"c90a36896fa233d7d827549eddb29a63d23838048caa8deae227dc07bad268ee\" pid:4003 exit_status:1 exited_at:{seconds:1747055462 nanos:985734494}" May 12 13:11:02.989864 kubelet[2842]: I0512 13:11:02.989805 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bv8dm" podStartSLOduration=2.800447409 podStartE2EDuration="28.989790653s" podCreationTimestamp="2025-05-12 13:10:34 +0000 UTC" firstStartedPulling="2025-05-12 13:10:35.210225202 +0000 UTC m=+20.846572696" lastFinishedPulling="2025-05-12 13:11:01.399568446 +0000 UTC m=+47.035915940" observedRunningTime="2025-05-12 13:11:02.989367959 +0000 UTC m=+48.625715453" watchObservedRunningTime="2025-05-12 13:11:02.989790653 +0000 UTC m=+48.626138137" May 12 13:11:03.451270 containerd[1572]: time="2025-05-12T13:11:03.451111743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68qdn,Uid:6cda24eb-9679-4236-8871-2a549e762495,Namespace:kube-system,Attempt:0,}" May 12 13:11:03.651414 systemd-networkd[1501]: vxlan.calico: Link UP May 12 13:11:03.651425 systemd-networkd[1501]: vxlan.calico: Gained carrier May 12 13:11:03.984417 containerd[1572]: time="2025-05-12T13:11:03.984377488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c\" id:\"c41a8959d761501d71f51d351a8f757df1d111d9b73a2b1ee50ccf2b4c7f02e0\" pid:4232 exit_status:1 exited_at:{seconds:1747055463 nanos:984075841}" May 12 13:11:04.005299 systemd[1]: Started sshd@13-10.0.0.126:22-10.0.0.1:45212.service 
- OpenSSH per-connection server daemon (10.0.0.1:45212). May 12 13:11:04.056502 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 45212 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA May 12 13:11:04.057908 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:11:04.062105 systemd-logind[1558]: New session 14 of user core. May 12 13:11:04.075373 systemd[1]: Started session-14.scope - Session 14 of User core. May 12 13:11:04.208233 sshd[4264]: Connection closed by 10.0.0.1 port 45212 May 12 13:11:04.208912 sshd-session[4262]: pam_unix(sshd:session): session closed for user core May 12 13:11:04.210789 systemd-networkd[1501]: cali68ef80c929c: Link UP May 12 13:11:04.211047 systemd-networkd[1501]: cali68ef80c929c: Gained carrier May 12 13:11:04.214577 systemd[1]: sshd@13-10.0.0.126:22-10.0.0.1:45212.service: Deactivated successfully. May 12 13:11:04.219029 systemd[1]: session-14.scope: Deactivated successfully. May 12 13:11:04.220065 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit. May 12 13:11:04.222429 systemd-logind[1558]: Removed session 14. 
May 12 13:11:04.373933 containerd[1572]: 2025-05-12 13:11:03.777 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0 coredns-7db6d8ff4d- kube-system 6cda24eb-9679-4236-8871-2a549e762495 747 0 2025-05-12 13:10:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-68qdn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68ef80c929c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-" May 12 13:11:04.373933 containerd[1572]: 2025-05-12 13:11:03.777 [INFO][4191] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" May 12 13:11:04.373933 containerd[1572]: 2025-05-12 13:11:03.842 [INFO][4204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" HandleID="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Workload="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.895 [INFO][4204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" HandleID="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Workload="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003e1020), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-68qdn", "timestamp":"2025-05-12 13:11:03.842364385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.895 [INFO][4204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.895 [INFO][4204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.895 [INFO][4204] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.897 [INFO][4204] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" host="localhost" May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.901 [INFO][4204] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.906 [INFO][4204] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.908 [INFO][4204] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.910 [INFO][4204] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:11:04.374964 containerd[1572]: 2025-05-12 13:11:03.910 [INFO][4204] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" 
host="localhost" May 12 13:11:04.375361 containerd[1572]: 2025-05-12 13:11:03.912 [INFO][4204] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229 May 12 13:11:04.375361 containerd[1572]: 2025-05-12 13:11:03.949 [INFO][4204] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" host="localhost" May 12 13:11:04.375361 containerd[1572]: 2025-05-12 13:11:04.200 [INFO][4204] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" host="localhost" May 12 13:11:04.375361 containerd[1572]: 2025-05-12 13:11:04.200 [INFO][4204] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" host="localhost" May 12 13:11:04.375361 containerd[1572]: 2025-05-12 13:11:04.200 [INFO][4204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 12 13:11:04.375361 containerd[1572]: 2025-05-12 13:11:04.200 [INFO][4204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" HandleID="k8s-pod-network.c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Workload="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" May 12 13:11:04.375483 containerd[1572]: 2025-05-12 13:11:04.203 [INFO][4191] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6cda24eb-9679-4236-8871-2a549e762495", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-68qdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68ef80c929c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:04.375538 containerd[1572]: 2025-05-12 13:11:04.203 [INFO][4191] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" May 12 13:11:04.375538 containerd[1572]: 2025-05-12 13:11:04.203 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68ef80c929c ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" May 12 13:11:04.375538 containerd[1572]: 2025-05-12 13:11:04.211 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" May 12 13:11:04.375598 containerd[1572]: 2025-05-12 13:11:04.211 [INFO][4191] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6cda24eb-9679-4236-8871-2a549e762495", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229", Pod:"coredns-7db6d8ff4d-68qdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68ef80c929c", MAC:"62:7f:05:6a:23:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:04.375598 containerd[1572]: 2025-05-12 13:11:04.369 [INFO][4191] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-68qdn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--68qdn-eth0" May 12 13:11:04.453496 containerd[1572]: time="2025-05-12T13:11:04.453392284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrfvd,Uid:593e3376-13c4-48db-91c1-b011b6e2f9ed,Namespace:kube-system,Attempt:0,}" May 12 13:11:04.453608 containerd[1572]: time="2025-05-12T13:11:04.453542036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5948b4d8f7-qlvpt,Uid:3e848244-737b-478e-8968-b75481ca35df,Namespace:calico-system,Attempt:0,}" May 12 13:11:04.619585 containerd[1572]: time="2025-05-12T13:11:04.619517960Z" level=info msg="connecting to shim c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229" address="unix:///run/containerd/s/5bff047e91a6bf5d27c5009bd06a1adfddebb9d9592e14690ce031c3549b4709" namespace=k8s.io protocol=ttrpc version=3 May 12 13:11:04.646392 systemd[1]: Started cri-containerd-c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229.scope - libcontainer container c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229. 
May 12 13:11:04.664928 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:11:04.677661 systemd-networkd[1501]: caliad04e8f2de2: Link UP May 12 13:11:04.678485 systemd-networkd[1501]: caliad04e8f2de2: Gained carrier May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.534 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0 coredns-7db6d8ff4d- kube-system 593e3376-13c4-48db-91c1-b011b6e2f9ed 745 0 2025-05-12 13:10:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-nrfvd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad04e8f2de2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.534 [INFO][4308] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.570 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" HandleID="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Workload="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.626 [INFO][4331] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" HandleID="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Workload="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005b9bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-nrfvd", "timestamp":"2025-05-12 13:11:04.570090792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.626 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.627 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.628 [INFO][4331] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.631 [INFO][4331] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.636 [INFO][4331] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.641 [INFO][4331] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.642 [INFO][4331] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.644 [INFO][4331] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.645 [INFO][4331] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.646 [INFO][4331] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616 May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.653 [INFO][4331] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.668 [INFO][4331] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.668 [INFO][4331] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" host="localhost" May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.668 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 12 13:11:04.718450 containerd[1572]: 2025-05-12 13:11:04.668 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" HandleID="k8s-pod-network.4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Workload="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" May 12 13:11:04.719031 containerd[1572]: 2025-05-12 13:11:04.671 [INFO][4308] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"593e3376-13c4-48db-91c1-b011b6e2f9ed", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-nrfvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad04e8f2de2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:04.719031 containerd[1572]: 2025-05-12 13:11:04.671 [INFO][4308] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" May 12 13:11:04.719031 containerd[1572]: 2025-05-12 13:11:04.671 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad04e8f2de2 ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" May 12 13:11:04.719031 containerd[1572]: 2025-05-12 13:11:04.678 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" May 12 13:11:04.719031 containerd[1572]: 2025-05-12 13:11:04.679 [INFO][4308] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"593e3376-13c4-48db-91c1-b011b6e2f9ed", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616", Pod:"coredns-7db6d8ff4d-nrfvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad04e8f2de2", MAC:"1e:f1:8a:cd:c3:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:04.719031 containerd[1572]: 2025-05-12 13:11:04.715 [INFO][4308] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-nrfvd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nrfvd-eth0" May 12 13:11:04.732370 containerd[1572]: time="2025-05-12T13:11:04.732326402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-68qdn,Uid:6cda24eb-9679-4236-8871-2a549e762495,Namespace:kube-system,Attempt:0,} returns sandbox id \"c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229\"" May 12 13:11:04.744759 containerd[1572]: time="2025-05-12T13:11:04.744704149Z" level=info msg="CreateContainer within sandbox \"c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 13:11:04.753077 systemd-networkd[1501]: califa9d3884d13: Link UP May 12 13:11:04.753615 systemd-networkd[1501]: califa9d3884d13: Gained carrier May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.534 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0 calico-kube-controllers-5948b4d8f7- calico-system 3e848244-737b-478e-8968-b75481ca35df 749 0 2025-05-12 13:10:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5948b4d8f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5948b4d8f7-qlvpt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califa9d3884d13 [] []}} ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.534 [INFO][4297] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.571 [INFO][4333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" HandleID="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Workload="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.631 [INFO][4333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" HandleID="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Workload="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000536d70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5948b4d8f7-qlvpt", "timestamp":"2025-05-12 13:11:04.571621167 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.631 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.668 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.668 [INFO][4333] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.671 [INFO][4333] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.676 [INFO][4333] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.685 [INFO][4333] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.687 [INFO][4333] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.715 [INFO][4333] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.715 [INFO][4333] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.717 [INFO][4333] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1 May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.721 [INFO][4333] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.748 [INFO][4333] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.748 [INFO][4333] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" host="localhost" May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.748 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:11:04.779015 containerd[1572]: 2025-05-12 13:11:04.748 [INFO][4333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" HandleID="k8s-pod-network.4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Workload="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" May 12 13:11:04.779598 containerd[1572]: 2025-05-12 13:11:04.751 [INFO][4297] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0", GenerateName:"calico-kube-controllers-5948b4d8f7-", Namespace:"calico-system", SelfLink:"", UID:"3e848244-737b-478e-8968-b75481ca35df", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5948b4d8f7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5948b4d8f7-qlvpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa9d3884d13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:04.779598 containerd[1572]: 2025-05-12 13:11:04.751 [INFO][4297] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" May 12 13:11:04.779598 containerd[1572]: 2025-05-12 13:11:04.751 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa9d3884d13 ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" May 12 13:11:04.779598 containerd[1572]: 2025-05-12 13:11:04.753 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" May 12 13:11:04.779598 containerd[1572]: 2025-05-12 13:11:04.754 [INFO][4297] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0", GenerateName:"calico-kube-controllers-5948b4d8f7-", Namespace:"calico-system", SelfLink:"", UID:"3e848244-737b-478e-8968-b75481ca35df", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5948b4d8f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1", Pod:"calico-kube-controllers-5948b4d8f7-qlvpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa9d3884d13", MAC:"1a:04:49:64:f7:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:04.779598 containerd[1572]: 2025-05-12 13:11:04.775 [INFO][4297] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" Namespace="calico-system" Pod="calico-kube-controllers-5948b4d8f7-qlvpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5948b4d8f7--qlvpt-eth0" May 12 13:11:04.902444 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL May 12 13:11:05.183925 containerd[1572]: time="2025-05-12T13:11:05.183735174Z" level=info msg="Container 9aeebf35ca5a941f9c60d760e33b8abc57300873ee2a73bb17eb8740c562de65: CDI devices from CRI Config.CDIDevices: []" May 12 13:11:05.451293 containerd[1572]: time="2025-05-12T13:11:05.451124652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xn57t,Uid:0b2247fa-6232-4294-bb3c-70f80086d491,Namespace:calico-apiserver,Attempt:0,}" May 12 13:11:06.055640 containerd[1572]: time="2025-05-12T13:11:06.055602859Z" level=info msg="CreateContainer within sandbox \"c09de41e8075b462094642885fa903f6b1966d165ddf01b9b5e2d4294f927229\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9aeebf35ca5a941f9c60d760e33b8abc57300873ee2a73bb17eb8740c562de65\"" May 12 13:11:06.056904 containerd[1572]: time="2025-05-12T13:11:06.056846183Z" level=info msg="StartContainer for \"9aeebf35ca5a941f9c60d760e33b8abc57300873ee2a73bb17eb8740c562de65\"" May 12 13:11:06.057778 containerd[1572]: time="2025-05-12T13:11:06.057742637Z" level=info msg="connecting to shim 9aeebf35ca5a941f9c60d760e33b8abc57300873ee2a73bb17eb8740c562de65" address="unix:///run/containerd/s/5bff047e91a6bf5d27c5009bd06a1adfddebb9d9592e14690ce031c3549b4709" protocol=ttrpc version=3 May 12 13:11:06.063867 containerd[1572]: time="2025-05-12T13:11:06.063670048Z" level=info msg="connecting to shim 4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1" address="unix:///run/containerd/s/74fc2564e93917f3bd598e9617b2b0b9806d3c2d70de6b75eb1a785c2069f862" namespace=k8s.io protocol=ttrpc version=3 May 12 13:11:06.075284 containerd[1572]: 
time="2025-05-12T13:11:06.074810667Z" level=info msg="connecting to shim 4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616" address="unix:///run/containerd/s/7b2896b65fc8bed40e81f0b619437e2024ec208000caae62770a8460ab925f9b" namespace=k8s.io protocol=ttrpc version=3 May 12 13:11:06.082398 systemd[1]: Started cri-containerd-9aeebf35ca5a941f9c60d760e33b8abc57300873ee2a73bb17eb8740c562de65.scope - libcontainer container 9aeebf35ca5a941f9c60d760e33b8abc57300873ee2a73bb17eb8740c562de65. May 12 13:11:06.087113 systemd[1]: Started cri-containerd-4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1.scope - libcontainer container 4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1. May 12 13:11:06.103359 systemd[1]: Started cri-containerd-4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616.scope - libcontainer container 4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616. May 12 13:11:06.109863 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:11:06.117903 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:11:06.119699 systemd-networkd[1501]: caliad04e8f2de2: Gained IPv6LL May 12 13:11:06.144011 containerd[1572]: time="2025-05-12T13:11:06.143622982Z" level=info msg="StartContainer for \"9aeebf35ca5a941f9c60d760e33b8abc57300873ee2a73bb17eb8740c562de65\" returns successfully" May 12 13:11:06.154938 containerd[1572]: time="2025-05-12T13:11:06.154869470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5948b4d8f7-qlvpt,Uid:3e848244-737b-478e-8968-b75481ca35df,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1\"" May 12 13:11:06.156595 containerd[1572]: time="2025-05-12T13:11:06.156565145Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 12 13:11:06.166591 containerd[1572]: time="2025-05-12T13:11:06.166522452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrfvd,Uid:593e3376-13c4-48db-91c1-b011b6e2f9ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616\"" May 12 13:11:06.169375 containerd[1572]: time="2025-05-12T13:11:06.169338640Z" level=info msg="CreateContainer within sandbox \"4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 13:11:06.192425 containerd[1572]: time="2025-05-12T13:11:06.192381490Z" level=info msg="Container 437ccc2b9a22dad2a1dcabe065e836682c86c3949473ef8783beaab7dadf30e2: CDI devices from CRI Config.CDIDevices: []" May 12 13:11:06.199599 containerd[1572]: time="2025-05-12T13:11:06.199447739Z" level=info msg="CreateContainer within sandbox \"4488ad1afd43a09e4ebf9c9b673c49b8cad2f340eefe27dfeed86291f0384616\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"437ccc2b9a22dad2a1dcabe065e836682c86c3949473ef8783beaab7dadf30e2\"" May 12 13:11:06.200158 containerd[1572]: time="2025-05-12T13:11:06.200131252Z" level=info msg="StartContainer for \"437ccc2b9a22dad2a1dcabe065e836682c86c3949473ef8783beaab7dadf30e2\"" May 12 13:11:06.201463 containerd[1572]: time="2025-05-12T13:11:06.201410936Z" level=info msg="connecting to shim 437ccc2b9a22dad2a1dcabe065e836682c86c3949473ef8783beaab7dadf30e2" address="unix:///run/containerd/s/7b2896b65fc8bed40e81f0b619437e2024ec208000caae62770a8460ab925f9b" protocol=ttrpc version=3 May 12 13:11:06.239522 systemd[1]: Started cri-containerd-437ccc2b9a22dad2a1dcabe065e836682c86c3949473ef8783beaab7dadf30e2.scope - libcontainer container 437ccc2b9a22dad2a1dcabe065e836682c86c3949473ef8783beaab7dadf30e2. 
May 12 13:11:06.250403 systemd-networkd[1501]: cali68ef80c929c: Gained IPv6LL May 12 13:11:06.254380 systemd-networkd[1501]: calia734c9aa74e: Link UP May 12 13:11:06.255837 systemd-networkd[1501]: calia734c9aa74e: Gained carrier May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.166 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0 calico-apiserver-6d5d7fc86f- calico-apiserver 0b2247fa-6232-4294-bb3c-70f80086d491 748 0 2025-05-12 13:10:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5d7fc86f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d5d7fc86f-xn57t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia734c9aa74e [] []}} ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.166 [INFO][4524] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.196 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" HandleID="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Workload="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" May 12 13:11:06.279168 
containerd[1572]: 2025-05-12 13:11:06.206 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" HandleID="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Workload="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042c400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d5d7fc86f-xn57t", "timestamp":"2025-05-12 13:11:06.196047785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.206 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.206 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.206 [INFO][4563] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.208 [INFO][4563] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.215 [INFO][4563] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.220 [INFO][4563] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.221 [INFO][4563] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.224 [INFO][4563] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.224 [INFO][4563] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.225 [INFO][4563] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4 May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.230 [INFO][4563] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.239 [INFO][4563] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.240 [INFO][4563] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" host="localhost" May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.240 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:11:06.279168 containerd[1572]: 2025-05-12 13:11:06.240 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" HandleID="k8s-pod-network.c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Workload="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" May 12 13:11:06.279993 containerd[1572]: 2025-05-12 13:11:06.244 [INFO][4524] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0", GenerateName:"calico-apiserver-6d5d7fc86f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b2247fa-6232-4294-bb3c-70f80086d491", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5d7fc86f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d5d7fc86f-xn57t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia734c9aa74e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:06.279993 containerd[1572]: 2025-05-12 13:11:06.244 [INFO][4524] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" May 12 13:11:06.279993 containerd[1572]: 2025-05-12 13:11:06.244 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia734c9aa74e ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" May 12 13:11:06.279993 containerd[1572]: 2025-05-12 13:11:06.257 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" May 12 13:11:06.279993 containerd[1572]: 2025-05-12 13:11:06.260 [INFO][4524] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0", GenerateName:"calico-apiserver-6d5d7fc86f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b2247fa-6232-4294-bb3c-70f80086d491", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5d7fc86f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4", Pod:"calico-apiserver-6d5d7fc86f-xn57t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia734c9aa74e", MAC:"a6:4d:d7:09:ac:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:06.279993 containerd[1572]: 2025-05-12 13:11:06.276 [INFO][4524] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" 
Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xn57t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xn57t-eth0" May 12 13:11:06.450508 containerd[1572]: time="2025-05-12T13:11:06.450463643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xt4rv,Uid:0ec87d19-6b2f-4a96-8c61-20622365556d,Namespace:calico-apiserver,Attempt:0,}" May 12 13:11:06.473495 containerd[1572]: time="2025-05-12T13:11:06.473461739Z" level=info msg="StartContainer for \"437ccc2b9a22dad2a1dcabe065e836682c86c3949473ef8783beaab7dadf30e2\" returns successfully" May 12 13:11:06.497504 containerd[1572]: time="2025-05-12T13:11:06.497459031Z" level=info msg="connecting to shim c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4" address="unix:///run/containerd/s/376a893e885e9f233bd0397d720fa77d20f1c6a1c183a2f3b73c64977a943144" namespace=k8s.io protocol=ttrpc version=3 May 12 13:11:06.525376 systemd[1]: Started cri-containerd-c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4.scope - libcontainer container c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4. 
May 12 13:11:06.538846 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:11:06.574868 containerd[1572]: time="2025-05-12T13:11:06.574826670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xn57t,Uid:0b2247fa-6232-4294-bb3c-70f80086d491,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4\"" May 12 13:11:06.592781 systemd-networkd[1501]: calibf70c0f1eb3: Link UP May 12 13:11:06.592968 systemd-networkd[1501]: calibf70c0f1eb3: Gained carrier May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.514 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0 calico-apiserver-6d5d7fc86f- calico-apiserver 0ec87d19-6b2f-4a96-8c61-20622365556d 741 0 2025-05-12 13:10:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5d7fc86f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d5d7fc86f-xt4rv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf70c0f1eb3 [] []}} ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.515 [INFO][4624] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" May 
12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.549 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" HandleID="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Workload="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.557 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" HandleID="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Workload="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d5d7fc86f-xt4rv", "timestamp":"2025-05-12 13:11:06.549758197 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.558 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.558 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.558 [INFO][4671] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.560 [INFO][4671] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.564 [INFO][4671] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.570 [INFO][4671] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.572 [INFO][4671] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.574 [INFO][4671] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.574 [INFO][4671] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.576 [INFO][4671] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4 May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.580 [INFO][4671] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.586 [INFO][4671] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.586 [INFO][4671] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" host="localhost" May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.586 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:11:06.604812 containerd[1572]: 2025-05-12 13:11:06.586 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" HandleID="k8s-pod-network.ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Workload="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" May 12 13:11:06.605336 containerd[1572]: 2025-05-12 13:11:06.589 [INFO][4624] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0", GenerateName:"calico-apiserver-6d5d7fc86f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ec87d19-6b2f-4a96-8c61-20622365556d", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5d7fc86f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d5d7fc86f-xt4rv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf70c0f1eb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:06.605336 containerd[1572]: 2025-05-12 13:11:06.589 [INFO][4624] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" May 12 13:11:06.605336 containerd[1572]: 2025-05-12 13:11:06.589 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf70c0f1eb3 ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" May 12 13:11:06.605336 containerd[1572]: 2025-05-12 13:11:06.591 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" May 12 13:11:06.605336 containerd[1572]: 2025-05-12 13:11:06.592 [INFO][4624] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0", GenerateName:"calico-apiserver-6d5d7fc86f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ec87d19-6b2f-4a96-8c61-20622365556d", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5d7fc86f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4", Pod:"calico-apiserver-6d5d7fc86f-xt4rv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf70c0f1eb3", MAC:"aa:a6:f8:35:31:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:06.605336 containerd[1572]: 2025-05-12 13:11:06.600 [INFO][4624] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" 
Namespace="calico-apiserver" Pod="calico-apiserver-6d5d7fc86f-xt4rv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5d7fc86f--xt4rv-eth0" May 12 13:11:06.632381 containerd[1572]: time="2025-05-12T13:11:06.632319881Z" level=info msg="connecting to shim ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4" address="unix:///run/containerd/s/e061dae6f68c44f7737977a871a2944a4099ce377af57f9cb7ec7a459eff9750" namespace=k8s.io protocol=ttrpc version=3 May 12 13:11:06.660399 systemd[1]: Started cri-containerd-ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4.scope - libcontainer container ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4. May 12 13:11:06.674297 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:11:06.704468 containerd[1572]: time="2025-05-12T13:11:06.704382028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5d7fc86f-xt4rv,Uid:0ec87d19-6b2f-4a96-8c61-20622365556d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4\"" May 12 13:11:06.758411 systemd-networkd[1501]: califa9d3884d13: Gained IPv6LL May 12 13:11:06.932090 kubelet[2842]: I0512 13:11:06.931996 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nrfvd" podStartSLOduration=37.931979187 podStartE2EDuration="37.931979187s" podCreationTimestamp="2025-05-12 13:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:11:06.931644027 +0000 UTC m=+52.567991522" watchObservedRunningTime="2025-05-12 13:11:06.931979187 +0000 UTC m=+52.568326671" May 12 13:11:06.944041 kubelet[2842]: I0512 13:11:06.943968 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-68qdn" 
podStartSLOduration=37.943951188 podStartE2EDuration="37.943951188s" podCreationTimestamp="2025-05-12 13:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:11:06.943324992 +0000 UTC m=+52.579672486" watchObservedRunningTime="2025-05-12 13:11:06.943951188 +0000 UTC m=+52.580298682" May 12 13:11:07.451281 containerd[1572]: time="2025-05-12T13:11:07.451220173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgnmw,Uid:1c46340b-2b7a-4015-8ce6-2f3287662c95,Namespace:calico-system,Attempt:0,}" May 12 13:11:07.556982 systemd-networkd[1501]: cali51f3558fea2: Link UP May 12 13:11:07.557170 systemd-networkd[1501]: cali51f3558fea2: Gained carrier May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.488 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mgnmw-eth0 csi-node-driver- calico-system 1c46340b-2b7a-4015-8ce6-2f3287662c95 591 0 2025-05-12 13:10:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mgnmw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali51f3558fea2 [] []}} ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.488 [INFO][4758] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" 
Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-eth0" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.520 [INFO][4773] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" HandleID="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Workload="localhost-k8s-csi--node--driver--mgnmw-eth0" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.528 [INFO][4773] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" HandleID="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Workload="localhost-k8s-csi--node--driver--mgnmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042fbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mgnmw", "timestamp":"2025-05-12 13:11:07.520614191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.528 [INFO][4773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.528 [INFO][4773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.528 [INFO][4773] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.529 [INFO][4773] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.535 [INFO][4773] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.539 [INFO][4773] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.540 [INFO][4773] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.542 [INFO][4773] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.542 [INFO][4773] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.543 [INFO][4773] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.546 [INFO][4773] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.551 [INFO][4773] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.551 [INFO][4773] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" host="localhost" May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.551 [INFO][4773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:11:07.570483 containerd[1572]: 2025-05-12 13:11:07.551 [INFO][4773] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" HandleID="k8s-pod-network.101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Workload="localhost-k8s-csi--node--driver--mgnmw-eth0" May 12 13:11:07.571317 containerd[1572]: 2025-05-12 13:11:07.554 [INFO][4758] cni-plugin/k8s.go 386: Populated endpoint ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mgnmw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c46340b-2b7a-4015-8ce6-2f3287662c95", ResourceVersion:"591", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mgnmw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51f3558fea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:07.571317 containerd[1572]: 2025-05-12 13:11:07.554 [INFO][4758] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-eth0" May 12 13:11:07.571317 containerd[1572]: 2025-05-12 13:11:07.554 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51f3558fea2 ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-eth0" May 12 13:11:07.571317 containerd[1572]: 2025-05-12 13:11:07.557 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-eth0" May 12 13:11:07.571317 containerd[1572]: 2025-05-12 13:11:07.558 [INFO][4758] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" 
Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mgnmw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c46340b-2b7a-4015-8ce6-2f3287662c95", ResourceVersion:"591", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed", Pod:"csi-node-driver-mgnmw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51f3558fea2", MAC:"72:c1:1d:d9:5b:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:11:07.571317 containerd[1572]: 2025-05-12 13:11:07.566 [INFO][4758] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" Namespace="calico-system" Pod="csi-node-driver-mgnmw" WorkloadEndpoint="localhost-k8s-csi--node--driver--mgnmw-eth0" May 12 13:11:07.611610 containerd[1572]: 
time="2025-05-12T13:11:07.611554345Z" level=info msg="connecting to shim 101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed" address="unix:///run/containerd/s/4f21f6116accf5a16868429054736c116de4c95e14acf4a312ba7132d51b6887" namespace=k8s.io protocol=ttrpc version=3 May 12 13:11:07.638439 systemd[1]: Started cri-containerd-101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed.scope - libcontainer container 101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed. May 12 13:11:07.649914 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:11:07.669590 containerd[1572]: time="2025-05-12T13:11:07.669543800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mgnmw,Uid:1c46340b-2b7a-4015-8ce6-2f3287662c95,Namespace:calico-system,Attempt:0,} returns sandbox id \"101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed\"" May 12 13:11:07.911446 systemd-networkd[1501]: calia734c9aa74e: Gained IPv6LL May 12 13:11:08.358424 systemd-networkd[1501]: calibf70c0f1eb3: Gained IPv6LL May 12 13:11:08.876758 containerd[1572]: time="2025-05-12T13:11:08.876686654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:11:08.877631 containerd[1572]: time="2025-05-12T13:11:08.877571165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 12 13:11:08.879141 containerd[1572]: time="2025-05-12T13:11:08.879096558Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:11:08.882221 containerd[1572]: time="2025-05-12T13:11:08.882185818Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:08.882822 containerd[1572]: time="2025-05-12T13:11:08.882780996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.726186766s"
May 12 13:11:08.882822 containerd[1572]: time="2025-05-12T13:11:08.882810962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 12 13:11:08.885319 containerd[1572]: time="2025-05-12T13:11:08.885292482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 12 13:11:08.895471 containerd[1572]: time="2025-05-12T13:11:08.895423863Z" level=info msg="CreateContainer within sandbox \"4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 12 13:11:08.903924 containerd[1572]: time="2025-05-12T13:11:08.903876534Z" level=info msg="Container 8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8: CDI devices from CRI Config.CDIDevices: []"
May 12 13:11:08.911471 containerd[1572]: time="2025-05-12T13:11:08.911438701Z" level=info msg="CreateContainer within sandbox \"4c1c508df6e76c378e13e602ed2875a8bac2e6ebb853e9712454d1c7443481c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8\""
May 12 13:11:08.912814 containerd[1572]: time="2025-05-12T13:11:08.911806202Z" level=info msg="StartContainer for \"8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8\""
May 12 13:11:08.912814 containerd[1572]: time="2025-05-12T13:11:08.912738402Z" level=info msg="connecting to shim 8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8" address="unix:///run/containerd/s/74fc2564e93917f3bd598e9617b2b0b9806d3c2d70de6b75eb1a785c2069f862" protocol=ttrpc version=3
May 12 13:11:08.936446 systemd[1]: Started cri-containerd-8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8.scope - libcontainer container 8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8.
May 12 13:11:09.158139 containerd[1572]: time="2025-05-12T13:11:09.158093618Z" level=info msg="StartContainer for \"8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8\" returns successfully"
May 12 13:11:09.223762 systemd[1]: Started sshd@14-10.0.0.126:22-10.0.0.1:54622.service - OpenSSH per-connection server daemon (10.0.0.1:54622).
May 12 13:11:09.277114 sshd[4887]: Accepted publickey for core from 10.0.0.1 port 54622 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:09.278913 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:09.284406 systemd-logind[1558]: New session 15 of user core.
May 12 13:11:09.289417 systemd[1]: Started session-15.scope - Session 15 of User core.
May 12 13:11:09.409076 sshd[4889]: Connection closed by 10.0.0.1 port 54622
May 12 13:11:09.409305 sshd-session[4887]: pam_unix(sshd:session): session closed for user core
May 12 13:11:09.413737 systemd[1]: sshd@14-10.0.0.126:22-10.0.0.1:54622.service: Deactivated successfully.
May 12 13:11:09.415933 systemd[1]: session-15.scope: Deactivated successfully.
May 12 13:11:09.416857 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
May 12 13:11:09.418661 systemd-logind[1558]: Removed session 15.
May 12 13:11:09.574481 systemd-networkd[1501]: cali51f3558fea2: Gained IPv6LL
May 12 13:11:10.983181 containerd[1572]: time="2025-05-12T13:11:10.983053813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8\" id:\"7363887d66ee0a3b0312215852051f81ed3b0339248aa78d8f499c0b03335ea4\" pid:4915 exited_at:{seconds:1747055470 nanos:982746556}"
May 12 13:11:10.997132 kubelet[2842]: I0512 13:11:10.996936 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5948b4d8f7-qlvpt" podStartSLOduration=34.26784264 podStartE2EDuration="36.996919653s" podCreationTimestamp="2025-05-12 13:10:34 +0000 UTC" firstStartedPulling="2025-05-12 13:11:06.155965799 +0000 UTC m=+51.792313293" lastFinishedPulling="2025-05-12 13:11:08.885042812 +0000 UTC m=+54.521390306" observedRunningTime="2025-05-12 13:11:10.00187453 +0000 UTC m=+55.638222024" watchObservedRunningTime="2025-05-12 13:11:10.996919653 +0000 UTC m=+56.633267147"
May 12 13:11:12.448708 containerd[1572]: time="2025-05-12T13:11:12.448609817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:12.449802 containerd[1572]: time="2025-05-12T13:11:12.449773431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437"
May 12 13:11:12.451013 containerd[1572]: time="2025-05-12T13:11:12.450967132Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:12.452948 containerd[1572]: time="2025-05-12T13:11:12.452918815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:12.453621 containerd[1572]: time="2025-05-12T13:11:12.453588222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.568263028s"
May 12 13:11:12.453621 containerd[1572]: time="2025-05-12T13:11:12.453618709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 12 13:11:12.454420 containerd[1572]: time="2025-05-12T13:11:12.454391510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 12 13:11:12.455825 containerd[1572]: time="2025-05-12T13:11:12.455795245Z" level=info msg="CreateContainer within sandbox \"c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 12 13:11:12.464349 containerd[1572]: time="2025-05-12T13:11:12.464293186Z" level=info msg="Container 02a4dd0a1995adf515a1859e347ac2e6dde0914a936badb23228d90c1bc99785: CDI devices from CRI Config.CDIDevices: []"
May 12 13:11:12.470911 containerd[1572]: time="2025-05-12T13:11:12.470849254Z" level=info msg="CreateContainer within sandbox \"c1d151f5102aff2701cbbc8ceaf3f7b81028493fca2d067c73e454244feb77f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"02a4dd0a1995adf515a1859e347ac2e6dde0914a936badb23228d90c1bc99785\""
May 12 13:11:12.473416 containerd[1572]: time="2025-05-12T13:11:12.473381106Z" level=info msg="StartContainer for \"02a4dd0a1995adf515a1859e347ac2e6dde0914a936badb23228d90c1bc99785\""
May 12 13:11:12.474400 containerd[1572]: time="2025-05-12T13:11:12.474377416Z" level=info msg="connecting to shim 02a4dd0a1995adf515a1859e347ac2e6dde0914a936badb23228d90c1bc99785" address="unix:///run/containerd/s/376a893e885e9f233bd0397d720fa77d20f1c6a1c183a2f3b73c64977a943144" protocol=ttrpc version=3
May 12 13:11:12.505393 systemd[1]: Started cri-containerd-02a4dd0a1995adf515a1859e347ac2e6dde0914a936badb23228d90c1bc99785.scope - libcontainer container 02a4dd0a1995adf515a1859e347ac2e6dde0914a936badb23228d90c1bc99785.
May 12 13:11:12.551676 containerd[1572]: time="2025-05-12T13:11:12.551636711Z" level=info msg="StartContainer for \"02a4dd0a1995adf515a1859e347ac2e6dde0914a936badb23228d90c1bc99785\" returns successfully"
May 12 13:11:12.888279 containerd[1572]: time="2025-05-12T13:11:12.887971192Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:12.889085 containerd[1572]: time="2025-05-12T13:11:12.889026583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 12 13:11:12.890633 containerd[1572]: time="2025-05-12T13:11:12.890584417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 436.168942ms"
May 12 13:11:12.890633 containerd[1572]: time="2025-05-12T13:11:12.890623300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 12 13:11:12.891881 containerd[1572]: time="2025-05-12T13:11:12.891850714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 12 13:11:12.893569 containerd[1572]: time="2025-05-12T13:11:12.893538201Z" level=info msg="CreateContainer within sandbox \"ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 12 13:11:12.904457 containerd[1572]: time="2025-05-12T13:11:12.904408086Z" level=info msg="Container c48fb870602ce0bdd6020f5fd98a1d6acbc108e64f2d37a05fbb743f515a4cad: CDI devices from CRI Config.CDIDevices: []"
May 12 13:11:12.914146 containerd[1572]: time="2025-05-12T13:11:12.914093406Z" level=info msg="CreateContainer within sandbox \"ec37995f5181c069d6a23fec13bce31f25c1f2265f2d63fb8bac3cf3c16f38f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c48fb870602ce0bdd6020f5fd98a1d6acbc108e64f2d37a05fbb743f515a4cad\""
May 12 13:11:12.914858 containerd[1572]: time="2025-05-12T13:11:12.914828877Z" level=info msg="StartContainer for \"c48fb870602ce0bdd6020f5fd98a1d6acbc108e64f2d37a05fbb743f515a4cad\""
May 12 13:11:12.915797 containerd[1572]: time="2025-05-12T13:11:12.915772709Z" level=info msg="connecting to shim c48fb870602ce0bdd6020f5fd98a1d6acbc108e64f2d37a05fbb743f515a4cad" address="unix:///run/containerd/s/e061dae6f68c44f7737977a871a2944a4099ce377af57f9cb7ec7a459eff9750" protocol=ttrpc version=3
May 12 13:11:12.940485 systemd[1]: Started cri-containerd-c48fb870602ce0bdd6020f5fd98a1d6acbc108e64f2d37a05fbb743f515a4cad.scope - libcontainer container c48fb870602ce0bdd6020f5fd98a1d6acbc108e64f2d37a05fbb743f515a4cad.
May 12 13:11:13.005365 containerd[1572]: time="2025-05-12T13:11:13.005324568Z" level=info msg="StartContainer for \"c48fb870602ce0bdd6020f5fd98a1d6acbc108e64f2d37a05fbb743f515a4cad\" returns successfully"
May 12 13:11:13.954907 kubelet[2842]: I0512 13:11:13.954872 2842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:11:13.964001 kubelet[2842]: I0512 13:11:13.963597 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xn57t" podStartSLOduration=34.085860662 podStartE2EDuration="39.963579394s" podCreationTimestamp="2025-05-12 13:10:34 +0000 UTC" firstStartedPulling="2025-05-12 13:11:06.576478493 +0000 UTC m=+52.212825987" lastFinishedPulling="2025-05-12 13:11:12.454197225 +0000 UTC m=+58.090544719" observedRunningTime="2025-05-12 13:11:12.965846973 +0000 UTC m=+58.602194468" watchObservedRunningTime="2025-05-12 13:11:13.963579394 +0000 UTC m=+59.599926888"
May 12 13:11:14.424231 systemd[1]: Started sshd@15-10.0.0.126:22-10.0.0.1:54624.service - OpenSSH per-connection server daemon (10.0.0.1:54624).
May 12 13:11:14.478851 sshd[5016]: Accepted publickey for core from 10.0.0.1 port 54624 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:14.481541 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:14.486933 systemd-logind[1558]: New session 16 of user core.
May 12 13:11:14.493415 systemd[1]: Started session-16.scope - Session 16 of User core.
May 12 13:11:14.642216 sshd[5020]: Connection closed by 10.0.0.1 port 54624
May 12 13:11:14.642557 sshd-session[5016]: pam_unix(sshd:session): session closed for user core
May 12 13:11:14.647240 systemd[1]: sshd@15-10.0.0.126:22-10.0.0.1:54624.service: Deactivated successfully.
May 12 13:11:14.649700 systemd[1]: session-16.scope: Deactivated successfully.
May 12 13:11:14.650689 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
May 12 13:11:14.652066 systemd-logind[1558]: Removed session 16.
May 12 13:11:14.743203 containerd[1572]: time="2025-05-12T13:11:14.743081123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:14.744104 containerd[1572]: time="2025-05-12T13:11:14.744057266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 12 13:11:14.745345 containerd[1572]: time="2025-05-12T13:11:14.745306721Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:14.747504 containerd[1572]: time="2025-05-12T13:11:14.747459512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:14.747967 containerd[1572]: time="2025-05-12T13:11:14.747924063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.856021642s"
May 12 13:11:14.747998 containerd[1572]: time="2025-05-12T13:11:14.747964650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 12 13:11:14.749914 containerd[1572]: time="2025-05-12T13:11:14.749889062Z" level=info msg="CreateContainer within sandbox \"101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 12 13:11:14.762162 containerd[1572]: time="2025-05-12T13:11:14.762103738Z" level=info msg="Container a166c26e03645ced22d8bb1a147acd5099c29d89845a1a4f219ae92896dde1e7: CDI devices from CRI Config.CDIDevices: []"
May 12 13:11:14.799674 containerd[1572]: time="2025-05-12T13:11:14.799631516Z" level=info msg="CreateContainer within sandbox \"101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a166c26e03645ced22d8bb1a147acd5099c29d89845a1a4f219ae92896dde1e7\""
May 12 13:11:14.800357 containerd[1572]: time="2025-05-12T13:11:14.800119292Z" level=info msg="StartContainer for \"a166c26e03645ced22d8bb1a147acd5099c29d89845a1a4f219ae92896dde1e7\""
May 12 13:11:14.801566 containerd[1572]: time="2025-05-12T13:11:14.801518228Z" level=info msg="connecting to shim a166c26e03645ced22d8bb1a147acd5099c29d89845a1a4f219ae92896dde1e7" address="unix:///run/containerd/s/4f21f6116accf5a16868429054736c116de4c95e14acf4a312ba7132d51b6887" protocol=ttrpc version=3
May 12 13:11:14.873428 systemd[1]: Started cri-containerd-a166c26e03645ced22d8bb1a147acd5099c29d89845a1a4f219ae92896dde1e7.scope - libcontainer container a166c26e03645ced22d8bb1a147acd5099c29d89845a1a4f219ae92896dde1e7.
May 12 13:11:14.938269 containerd[1572]: time="2025-05-12T13:11:14.938222074Z" level=info msg="StartContainer for \"a166c26e03645ced22d8bb1a147acd5099c29d89845a1a4f219ae92896dde1e7\" returns successfully"
May 12 13:11:14.939380 containerd[1572]: time="2025-05-12T13:11:14.939361623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 12 13:11:14.957965 kubelet[2842]: I0512 13:11:14.957934 2842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:11:15.970756 containerd[1572]: time="2025-05-12T13:11:15.970700549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c\" id:\"71088ad94fa38efc614b215acef1c611d54a8797878b800b645b10e379bd1ce8\" pid:5082 exited_at:{seconds:1747055475 nanos:970407119}"
May 12 13:11:15.987592 kubelet[2842]: I0512 13:11:15.987509 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d5d7fc86f-xt4rv" podStartSLOduration=35.801743088 podStartE2EDuration="41.987493979s" podCreationTimestamp="2025-05-12 13:10:34 +0000 UTC" firstStartedPulling="2025-05-12 13:11:06.705902994 +0000 UTC m=+52.342250488" lastFinishedPulling="2025-05-12 13:11:12.891653885 +0000 UTC m=+58.528001379" observedRunningTime="2025-05-12 13:11:13.963757158 +0000 UTC m=+59.600104652" watchObservedRunningTime="2025-05-12 13:11:15.987493979 +0000 UTC m=+61.623841473"
May 12 13:11:19.655322 systemd[1]: Started sshd@16-10.0.0.126:22-10.0.0.1:53844.service - OpenSSH per-connection server daemon (10.0.0.1:53844).
May 12 13:11:19.711794 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 53844 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:19.713359 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:19.717630 systemd-logind[1558]: New session 17 of user core.
May 12 13:11:19.726386 systemd[1]: Started session-17.scope - Session 17 of User core.
May 12 13:11:19.878353 sshd[5099]: Connection closed by 10.0.0.1 port 53844
May 12 13:11:19.879709 sshd-session[5097]: pam_unix(sshd:session): session closed for user core
May 12 13:11:19.885095 systemd[1]: sshd@16-10.0.0.126:22-10.0.0.1:53844.service: Deactivated successfully.
May 12 13:11:19.888054 systemd[1]: session-17.scope: Deactivated successfully.
May 12 13:11:19.889966 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
May 12 13:11:19.891271 systemd-logind[1558]: Removed session 17.
May 12 13:11:19.993837 containerd[1572]: time="2025-05-12T13:11:19.993476481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:19.994938 containerd[1572]: time="2025-05-12T13:11:19.994795576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 12 13:11:19.996431 containerd[1572]: time="2025-05-12T13:11:19.996408915Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:19.999271 containerd[1572]: time="2025-05-12T13:11:19.999211554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:11:19.999681 containerd[1572]: time="2025-05-12T13:11:19.999654035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 5.060268477s"
May 12 13:11:19.999732 containerd[1572]: time="2025-05-12T13:11:19.999683670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 12 13:11:20.019296 containerd[1572]: time="2025-05-12T13:11:20.019202588Z" level=info msg="CreateContainer within sandbox \"101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 12 13:11:20.029189 containerd[1572]: time="2025-05-12T13:11:20.029146439Z" level=info msg="Container 4f75ad86758569b94e0e41f32ccf13a3e9f8e0451e1befc29478da105708d295: CDI devices from CRI Config.CDIDevices: []"
May 12 13:11:20.039463 containerd[1572]: time="2025-05-12T13:11:20.039420738Z" level=info msg="CreateContainer within sandbox \"101306409d6dbf6c1284d6b31443d1d5fc2a7a405314d7a58c3e838382d695ed\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4f75ad86758569b94e0e41f32ccf13a3e9f8e0451e1befc29478da105708d295\""
May 12 13:11:20.039997 containerd[1572]: time="2025-05-12T13:11:20.039952226Z" level=info msg="StartContainer for \"4f75ad86758569b94e0e41f32ccf13a3e9f8e0451e1befc29478da105708d295\""
May 12 13:11:20.041485 containerd[1572]: time="2025-05-12T13:11:20.041454916Z" level=info msg="connecting to shim 4f75ad86758569b94e0e41f32ccf13a3e9f8e0451e1befc29478da105708d295" address="unix:///run/containerd/s/4f21f6116accf5a16868429054736c116de4c95e14acf4a312ba7132d51b6887" protocol=ttrpc version=3
May 12 13:11:20.087608 systemd[1]: Started cri-containerd-4f75ad86758569b94e0e41f32ccf13a3e9f8e0451e1befc29478da105708d295.scope - libcontainer container 4f75ad86758569b94e0e41f32ccf13a3e9f8e0451e1befc29478da105708d295.
May 12 13:11:20.166868 containerd[1572]: time="2025-05-12T13:11:20.166801341Z" level=info msg="StartContainer for \"4f75ad86758569b94e0e41f32ccf13a3e9f8e0451e1befc29478da105708d295\" returns successfully"
May 12 13:11:20.516370 kubelet[2842]: I0512 13:11:20.516314 2842 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 12 13:11:20.516370 kubelet[2842]: I0512 13:11:20.516368 2842 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 12 13:11:20.985504 kubelet[2842]: I0512 13:11:20.985439 2842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mgnmw" podStartSLOduration=34.651454334 podStartE2EDuration="46.985423339s" podCreationTimestamp="2025-05-12 13:10:34 +0000 UTC" firstStartedPulling="2025-05-12 13:11:07.670735577 +0000 UTC m=+53.307083071" lastFinishedPulling="2025-05-12 13:11:20.004704582 +0000 UTC m=+65.641052076" observedRunningTime="2025-05-12 13:11:20.98499911 +0000 UTC m=+66.621346604" watchObservedRunningTime="2025-05-12 13:11:20.985423339 +0000 UTC m=+66.621770823"
May 12 13:11:21.848836 containerd[1572]: time="2025-05-12T13:11:21.848799190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8df1a95e96de4e4b7922bdb469e7e8bb391bee8cf0e805c6086102c2bf4f71f8\" id:\"3e1e242070c63273084c06eb13331a4624903002edc057e24571cc8b98a69002\" pid:5158 exited_at:{seconds:1747055481 nanos:848598781}"
May 12 13:11:24.892146 systemd[1]: Started sshd@17-10.0.0.126:22-10.0.0.1:53854.service - OpenSSH per-connection server daemon (10.0.0.1:53854).
May 12 13:11:24.944895 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 53854 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:24.946349 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:24.950473 systemd-logind[1558]: New session 18 of user core.
May 12 13:11:24.960365 systemd[1]: Started session-18.scope - Session 18 of User core.
May 12 13:11:25.077227 sshd[5178]: Connection closed by 10.0.0.1 port 53854
May 12 13:11:25.077669 sshd-session[5176]: pam_unix(sshd:session): session closed for user core
May 12 13:11:25.087024 systemd[1]: sshd@17-10.0.0.126:22-10.0.0.1:53854.service: Deactivated successfully.
May 12 13:11:25.089142 systemd[1]: session-18.scope: Deactivated successfully.
May 12 13:11:25.089901 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
May 12 13:11:25.093047 systemd[1]: Started sshd@18-10.0.0.126:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862).
May 12 13:11:25.093964 systemd-logind[1558]: Removed session 18.
May 12 13:11:25.150934 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:25.152424 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:25.157556 systemd-logind[1558]: New session 19 of user core.
May 12 13:11:25.167412 systemd[1]: Started session-19.scope - Session 19 of User core.
May 12 13:11:25.375327 sshd[5193]: Connection closed by 10.0.0.1 port 53862
May 12 13:11:25.375764 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
May 12 13:11:25.387987 systemd[1]: sshd@18-10.0.0.126:22-10.0.0.1:53862.service: Deactivated successfully.
May 12 13:11:25.389863 systemd[1]: session-19.scope: Deactivated successfully.
May 12 13:11:25.390941 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
May 12 13:11:25.395369 systemd[1]: Started sshd@19-10.0.0.126:22-10.0.0.1:53872.service - OpenSSH per-connection server daemon (10.0.0.1:53872).
May 12 13:11:25.396534 systemd-logind[1558]: Removed session 19.
May 12 13:11:25.452614 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 53872 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:25.454021 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:25.458727 systemd-logind[1558]: New session 20 of user core.
May 12 13:11:25.465389 systemd[1]: Started session-20.scope - Session 20 of User core.
May 12 13:11:26.221795 kubelet[2842]: I0512 13:11:26.221695 2842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:11:27.073268 sshd[5206]: Connection closed by 10.0.0.1 port 53872
May 12 13:11:27.076381 sshd-session[5204]: pam_unix(sshd:session): session closed for user core
May 12 13:11:27.083303 systemd[1]: sshd@19-10.0.0.126:22-10.0.0.1:53872.service: Deactivated successfully.
May 12 13:11:27.085494 systemd[1]: session-20.scope: Deactivated successfully.
May 12 13:11:27.085760 systemd[1]: session-20.scope: Consumed 572ms CPU time, 64.4M memory peak.
May 12 13:11:27.086701 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
May 12 13:11:27.093041 systemd[1]: Started sshd@20-10.0.0.126:22-10.0.0.1:53876.service - OpenSSH per-connection server daemon (10.0.0.1:53876).
May 12 13:11:27.094503 systemd-logind[1558]: Removed session 20.
May 12 13:11:27.162281 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 53876 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:27.163883 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:27.168270 systemd-logind[1558]: New session 21 of user core.
May 12 13:11:27.180373 systemd[1]: Started session-21.scope - Session 21 of User core.
May 12 13:11:27.510435 sshd[5228]: Connection closed by 10.0.0.1 port 53876
May 12 13:11:27.512455 sshd-session[5226]: pam_unix(sshd:session): session closed for user core
May 12 13:11:27.522344 systemd[1]: sshd@20-10.0.0.126:22-10.0.0.1:53876.service: Deactivated successfully.
May 12 13:11:27.524262 systemd[1]: session-21.scope: Deactivated successfully.
May 12 13:11:27.525063 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
May 12 13:11:27.528541 systemd[1]: Started sshd@21-10.0.0.126:22-10.0.0.1:53888.service - OpenSSH per-connection server daemon (10.0.0.1:53888).
May 12 13:11:27.529782 systemd-logind[1558]: Removed session 21.
May 12 13:11:27.583434 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 53888 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:27.585138 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:27.590923 systemd-logind[1558]: New session 22 of user core.
May 12 13:11:27.596436 systemd[1]: Started session-22.scope - Session 22 of User core.
May 12 13:11:27.711777 sshd[5244]: Connection closed by 10.0.0.1 port 53888
May 12 13:11:27.712075 sshd-session[5240]: pam_unix(sshd:session): session closed for user core
May 12 13:11:27.716844 systemd[1]: sshd@21-10.0.0.126:22-10.0.0.1:53888.service: Deactivated successfully.
May 12 13:11:27.719231 systemd[1]: session-22.scope: Deactivated successfully.
May 12 13:11:27.720304 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
May 12 13:11:27.721860 systemd-logind[1558]: Removed session 22.
May 12 13:11:32.725424 systemd[1]: Started sshd@22-10.0.0.126:22-10.0.0.1:56820.service - OpenSSH per-connection server daemon (10.0.0.1:56820).
May 12 13:11:32.777916 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 56820 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:32.779776 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:32.784482 systemd-logind[1558]: New session 23 of user core.
May 12 13:11:32.795366 systemd[1]: Started session-23.scope - Session 23 of User core.
May 12 13:11:32.906355 sshd[5268]: Connection closed by 10.0.0.1 port 56820
May 12 13:11:32.906642 sshd-session[5266]: pam_unix(sshd:session): session closed for user core
May 12 13:11:32.910888 systemd[1]: sshd@22-10.0.0.126:22-10.0.0.1:56820.service: Deactivated successfully.
May 12 13:11:32.913029 systemd[1]: session-23.scope: Deactivated successfully.
May 12 13:11:32.913846 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
May 12 13:11:32.915111 systemd-logind[1558]: Removed session 23.
May 12 13:11:35.439154 kubelet[2842]: I0512 13:11:35.439105 2842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:11:37.919107 systemd[1]: Started sshd@23-10.0.0.126:22-10.0.0.1:56836.service - OpenSSH per-connection server daemon (10.0.0.1:56836).
May 12 13:11:37.990952 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 56836 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:37.992337 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:37.996547 systemd-logind[1558]: New session 24 of user core.
May 12 13:11:38.000366 systemd[1]: Started session-24.scope - Session 24 of User core.
May 12 13:11:38.133049 sshd[5285]: Connection closed by 10.0.0.1 port 56836
May 12 13:11:38.133361 sshd-session[5283]: pam_unix(sshd:session): session closed for user core
May 12 13:11:38.138851 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
May 12 13:11:38.141722 systemd[1]: sshd@23-10.0.0.126:22-10.0.0.1:56836.service: Deactivated successfully.
May 12 13:11:38.145795 systemd[1]: session-24.scope: Deactivated successfully.
May 12 13:11:38.152360 systemd-logind[1558]: Removed session 24.
May 12 13:11:43.145270 systemd[1]: Started sshd@24-10.0.0.126:22-10.0.0.1:44074.service - OpenSSH per-connection server daemon (10.0.0.1:44074).
May 12 13:11:43.205552 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 44074 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:43.207289 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:43.211738 systemd-logind[1558]: New session 25 of user core.
May 12 13:11:43.219414 systemd[1]: Started session-25.scope - Session 25 of User core.
May 12 13:11:43.334765 sshd[5301]: Connection closed by 10.0.0.1 port 44074
May 12 13:11:43.335099 sshd-session[5299]: pam_unix(sshd:session): session closed for user core
May 12 13:11:43.339672 systemd[1]: sshd@24-10.0.0.126:22-10.0.0.1:44074.service: Deactivated successfully.
May 12 13:11:43.341528 systemd[1]: session-25.scope: Deactivated successfully.
May 12 13:11:43.342603 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
May 12 13:11:43.343632 systemd-logind[1558]: Removed session 25.
May 12 13:11:45.965922 containerd[1572]: time="2025-05-12T13:11:45.965871912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2180380a79b5037f0a4dc452bbf59191eaf95c14bc6198586a7c3f0bebdee7c\" id:\"32f3288153a7fef6e0d1aad6b9b2fa2bafe0ccaa1a5cdfd5ccfb9fbd0ac9d4db\" pid:5331 exited_at:{seconds:1747055505 nanos:965550358}"
May 12 13:11:48.351311 systemd[1]: Started sshd@25-10.0.0.126:22-10.0.0.1:60218.service - OpenSSH per-connection server daemon (10.0.0.1:60218).
May 12 13:11:48.426352 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 60218 ssh2: RSA SHA256:A7yuKup+HEC7fg5XYyg4V6ZDMEdOL+RguXHoZBFmKcA
May 12 13:11:48.427898 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:11:48.431916 systemd-logind[1558]: New session 26 of user core.
May 12 13:11:48.437397 systemd[1]: Started session-26.scope - Session 26 of User core.
May 12 13:11:48.549317 sshd[5348]: Connection closed by 10.0.0.1 port 60218
May 12 13:11:48.549615 sshd-session[5346]: pam_unix(sshd:session): session closed for user core
May 12 13:11:48.553826 systemd[1]: sshd@25-10.0.0.126:22-10.0.0.1:60218.service: Deactivated successfully.
May 12 13:11:48.556031 systemd[1]: session-26.scope: Deactivated successfully.
May 12 13:11:48.556759 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
May 12 13:11:48.558049 systemd-logind[1558]: Removed session 26.