May 9 04:56:38.916834 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 04:56:38.916854 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 9 03:42:00 -00 2025
May 9 04:56:38.916863 kernel: KASLR enabled
May 9 04:56:38.916869 kernel: efi: EFI v2.7 by EDK II
May 9 04:56:38.916874 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 9 04:56:38.916879 kernel: random: crng init done
May 9 04:56:38.916886 kernel: secureboot: Secure boot disabled
May 9 04:56:38.916892 kernel: ACPI: Early table checksum verification disabled
May 9 04:56:38.916898 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 9 04:56:38.916905 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 9 04:56:38.916910 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916916 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916921 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916927 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916934 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916949 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916957 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916963 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916969 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:56:38.916975 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 9 04:56:38.916981 kernel: NUMA: Failed to initialise from firmware
May 9 04:56:38.916987 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 9 04:56:38.916993 kernel: NUMA: NODE_DATA [mem 0xdc954e00-0xdc95bfff]
May 9 04:56:38.916999 kernel: Zone ranges:
May 9 04:56:38.917005 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 9 04:56:38.917018 kernel: DMA32 empty
May 9 04:56:38.917024 kernel: Normal empty
May 9 04:56:38.917030 kernel: Device empty
May 9 04:56:38.917036 kernel: Movable zone start for each node
May 9 04:56:38.917042 kernel: Early memory node ranges
May 9 04:56:38.917048 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 9 04:56:38.917054 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 9 04:56:38.917059 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 9 04:56:38.917065 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 9 04:56:38.917071 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 9 04:56:38.917077 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 9 04:56:38.917083 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 9 04:56:38.917089 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 9 04:56:38.917096 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 9 04:56:38.917102 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 9 04:56:38.917111 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 9 04:56:38.917117 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 9 04:56:38.917124 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 9 04:56:38.917132 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 9 04:56:38.917138 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 9 04:56:38.917145 kernel: psci: probing for conduit method from ACPI.
May 9 04:56:38.917151 kernel: psci: PSCIv1.1 detected in firmware.
May 9 04:56:38.917157 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 04:56:38.917163 kernel: psci: Trusted OS migration not required
May 9 04:56:38.917170 kernel: psci: SMC Calling Convention v1.1
May 9 04:56:38.917176 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 9 04:56:38.917183 kernel: percpu: Embedded 31 pages/cpu s87016 r8192 d31768 u126976
May 9 04:56:38.917189 kernel: pcpu-alloc: s87016 r8192 d31768 u126976 alloc=31*4096
May 9 04:56:38.917210 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 9 04:56:38.917218 kernel: Detected PIPT I-cache on CPU0
May 9 04:56:38.917225 kernel: CPU features: detected: GIC system register CPU interface
May 9 04:56:38.917231 kernel: CPU features: detected: Hardware dirty bit management
May 9 04:56:38.917238 kernel: CPU features: detected: Spectre-v4
May 9 04:56:38.917244 kernel: CPU features: detected: Spectre-BHB
May 9 04:56:38.917250 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 04:56:38.917257 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 04:56:38.917263 kernel: CPU features: detected: ARM erratum 1418040
May 9 04:56:38.917269 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 04:56:38.917275 kernel: alternatives: applying boot alternatives
May 9 04:56:38.917283 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=180634d3e256b1dbb5700949694cb34c82ca79af028365e078744f4de51d78d8
May 9 04:56:38.917291 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 04:56:38.917297 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 04:56:38.917304 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 04:56:38.917310 kernel: Fallback order for Node 0: 0
May 9 04:56:38.917316 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 9 04:56:38.917323 kernel: Policy zone: DMA
May 9 04:56:38.917329 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 04:56:38.917335 kernel: software IO TLB: area num 4.
May 9 04:56:38.917342 kernel: software IO TLB: mapped [mem 0x00000000d5000000-0x00000000d9000000] (64MB)
May 9 04:56:38.917348 kernel: Memory: 2386496K/2572288K available (10432K kernel code, 2202K rwdata, 8168K rodata, 39040K init, 993K bss, 185792K reserved, 0K cma-reserved)
May 9 04:56:38.917355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 04:56:38.917362 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 04:56:38.917369 kernel: rcu: RCU event tracing is enabled.
May 9 04:56:38.917376 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 04:56:38.917382 kernel: Trampoline variant of Tasks RCU enabled.
May 9 04:56:38.917389 kernel: Tracing variant of Tasks RCU enabled.
May 9 04:56:38.917395 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 04:56:38.917402 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 04:56:38.917408 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 04:56:38.917414 kernel: GICv3: 256 SPIs implemented
May 9 04:56:38.917420 kernel: GICv3: 0 Extended SPIs implemented
May 9 04:56:38.917427 kernel: Root IRQ handler: gic_handle_irq
May 9 04:56:38.917433 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 04:56:38.917441 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 9 04:56:38.917447 kernel: ITS [mem 0x08080000-0x0809ffff]
May 9 04:56:38.917454 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 9 04:56:38.917460 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 9 04:56:38.917466 kernel: GICv3: using LPI property table @0x00000000400f0000
May 9 04:56:38.917473 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 9 04:56:38.917479 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 04:56:38.917486 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:56:38.917492 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 04:56:38.917499 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 04:56:38.917506 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 04:56:38.917514 kernel: arm-pv: using stolen time PV
May 9 04:56:38.917520 kernel: Console: colour dummy device 80x25
May 9 04:56:38.917527 kernel: ACPI: Core revision 20230628
May 9 04:56:38.917534 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 04:56:38.917541 kernel: pid_max: default: 32768 minimum: 301
May 9 04:56:38.917547 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 04:56:38.917554 kernel: landlock: Up and running.
May 9 04:56:38.917561 kernel: SELinux: Initializing.
May 9 04:56:38.917567 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 04:56:38.917575 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 04:56:38.917582 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 9 04:56:38.917589 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 04:56:38.917596 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 04:56:38.917602 kernel: rcu: Hierarchical SRCU implementation.
May 9 04:56:38.917609 kernel: rcu: Max phase no-delay instances is 400.
May 9 04:56:38.917615 kernel: Platform MSI: ITS@0x8080000 domain created
May 9 04:56:38.917621 kernel: PCI/MSI: ITS@0x8080000 domain created
May 9 04:56:38.917640 kernel: Remapping and enabling EFI services.
May 9 04:56:38.917648 kernel: smp: Bringing up secondary CPUs ...
May 9 04:56:38.917659 kernel: Detected PIPT I-cache on CPU1
May 9 04:56:38.917666 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 9 04:56:38.917675 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 9 04:56:38.917682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:56:38.917688 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 04:56:38.917695 kernel: Detected PIPT I-cache on CPU2
May 9 04:56:38.917702 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 9 04:56:38.917709 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 9 04:56:38.917718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:56:38.917724 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 9 04:56:38.917731 kernel: Detected PIPT I-cache on CPU3
May 9 04:56:38.917738 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 9 04:56:38.917745 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 9 04:56:38.917752 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:56:38.917759 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 9 04:56:38.917765 kernel: smp: Brought up 1 node, 4 CPUs
May 9 04:56:38.917772 kernel: SMP: Total of 4 processors activated.
May 9 04:56:38.917781 kernel: CPU features: detected: 32-bit EL0 Support
May 9 04:56:38.917788 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 04:56:38.917794 kernel: CPU features: detected: Common not Private translations
May 9 04:56:38.917801 kernel: CPU features: detected: CRC32 instructions
May 9 04:56:38.917808 kernel: CPU features: detected: Enhanced Virtualization Traps
May 9 04:56:38.917815 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 04:56:38.917827 kernel: CPU features: detected: LSE atomic instructions
May 9 04:56:38.917834 kernel: CPU features: detected: Privileged Access Never
May 9 04:56:38.917841 kernel: CPU features: detected: RAS Extension Support
May 9 04:56:38.917849 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 9 04:56:38.917856 kernel: CPU: All CPU(s) started at EL1
May 9 04:56:38.917863 kernel: alternatives: applying system-wide alternatives
May 9 04:56:38.917869 kernel: devtmpfs: initialized
May 9 04:56:38.917876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 04:56:38.917883 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 04:56:38.917890 kernel: pinctrl core: initialized pinctrl subsystem
May 9 04:56:38.917897 kernel: SMBIOS 3.0.0 present.
May 9 04:56:38.917904 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 9 04:56:38.917912 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 04:56:38.917919 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 04:56:38.917926 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 04:56:38.917933 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 04:56:38.917939 kernel: audit: initializing netlink subsys (disabled)
May 9 04:56:38.917951 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 9 04:56:38.917958 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 04:56:38.917965 kernel: cpuidle: using governor menu
May 9 04:56:38.917972 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 04:56:38.917981 kernel: ASID allocator initialised with 32768 entries
May 9 04:56:38.917988 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 04:56:38.917994 kernel: Serial: AMBA PL011 UART driver
May 9 04:56:38.918001 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 04:56:38.918008 kernel: Modules: 0 pages in range for non-PLT usage
May 9 04:56:38.918015 kernel: Modules: 509024 pages in range for PLT usage
May 9 04:56:38.918022 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 04:56:38.918029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 04:56:38.918036 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 04:56:38.918044 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 04:56:38.918051 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 04:56:38.918058 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 04:56:38.918064 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 04:56:38.918071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 04:56:38.918078 kernel: ACPI: Added _OSI(Module Device)
May 9 04:56:38.918085 kernel: ACPI: Added _OSI(Processor Device)
May 9 04:56:38.918092 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 04:56:38.918098 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 04:56:38.918106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 04:56:38.918113 kernel: ACPI: Interpreter enabled
May 9 04:56:38.918120 kernel: ACPI: Using GIC for interrupt routing
May 9 04:56:38.918127 kernel: ACPI: MCFG table detected, 1 entries
May 9 04:56:38.918134 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 04:56:38.918141 kernel: printk: console [ttyAMA0] enabled
May 9 04:56:38.918148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 04:56:38.918294 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 04:56:38.918370 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 04:56:38.918433 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 04:56:38.918494 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 04:56:38.918555 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 04:56:38.918564 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 04:56:38.918571 kernel: PCI host bridge to bus 0000:00
May 9 04:56:38.918641 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 04:56:38.918701 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 04:56:38.918756 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 04:56:38.918811 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 04:56:38.918888 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 04:56:38.918973 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 04:56:38.919043 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 04:56:38.919107 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 04:56:38.919174 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 04:56:38.919271 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 04:56:38.919337 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 04:56:38.919400 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 04:56:38.919459 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 04:56:38.919518 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 04:56:38.919574 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 04:56:38.919587 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 04:56:38.919594 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 04:56:38.919601 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 04:56:38.919608 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 04:56:38.919615 kernel: iommu: Default domain type: Translated
May 9 04:56:38.919622 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 04:56:38.919629 kernel: efivars: Registered efivars operations
May 9 04:56:38.919636 kernel: vgaarb: loaded
May 9 04:56:38.919642 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 04:56:38.919651 kernel: VFS: Disk quotas dquot_6.6.0
May 9 04:56:38.919658 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 04:56:38.919665 kernel: pnp: PnP ACPI init
May 9 04:56:38.919740 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 04:56:38.919750 kernel: pnp: PnP ACPI: found 1 devices
May 9 04:56:38.919757 kernel: NET: Registered PF_INET protocol family
May 9 04:56:38.919764 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 04:56:38.919771 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 04:56:38.919780 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 04:56:38.919787 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 04:56:38.919794 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 04:56:38.919801 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 04:56:38.919808 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 04:56:38.919815 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 04:56:38.919822 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 04:56:38.919828 kernel: PCI: CLS 0 bytes, default 64
May 9 04:56:38.919835 kernel: kvm [1]: HYP mode not available
May 9 04:56:38.919844 kernel: Initialise system trusted keyrings
May 9 04:56:38.919851 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 04:56:38.919858 kernel: Key type asymmetric registered
May 9 04:56:38.919865 kernel: Asymmetric key parser 'x509' registered
May 9 04:56:38.919873 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 9 04:56:38.919880 kernel: io scheduler mq-deadline registered
May 9 04:56:38.919887 kernel: io scheduler kyber registered
May 9 04:56:38.919894 kernel: io scheduler bfq registered
May 9 04:56:38.919901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 04:56:38.919910 kernel: ACPI: button: Power Button [PWRB]
May 9 04:56:38.919917 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 04:56:38.919989 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 04:56:38.919998 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 04:56:38.920006 kernel: thunder_xcv, ver 1.0
May 9 04:56:38.920013 kernel: thunder_bgx, ver 1.0
May 9 04:56:38.920019 kernel: nicpf, ver 1.0
May 9 04:56:38.920026 kernel: nicvf, ver 1.0
May 9 04:56:38.920097 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 04:56:38.920160 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T04:56:38 UTC (1746766598)
May 9 04:56:38.920169 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 04:56:38.920176 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 04:56:38.920183 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 04:56:38.920190 kernel: watchdog: Hard watchdog permanently disabled
May 9 04:56:38.920242 kernel: NET: Registered PF_INET6 protocol family
May 9 04:56:38.920249 kernel: Segment Routing with IPv6
May 9 04:56:38.920257 kernel: In-situ OAM (IOAM) with IPv6
May 9 04:56:38.920267 kernel: NET: Registered PF_PACKET protocol family
May 9 04:56:38.920274 kernel: Key type dns_resolver registered
May 9 04:56:38.920281 kernel: registered taskstats version 1
May 9 04:56:38.920287 kernel: Loading compiled-in X.509 certificates
May 9 04:56:38.920294 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: aad33ee745b4b133d332bac6576e33058e4e0478'
May 9 04:56:38.920301 kernel: Key type .fscrypt registered
May 9 04:56:38.920308 kernel: Key type fscrypt-provisioning registered
May 9 04:56:38.920315 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 04:56:38.920322 kernel: ima: Allocated hash algorithm: sha1
May 9 04:56:38.920330 kernel: ima: No architecture policies found
May 9 04:56:38.920337 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 04:56:38.920344 kernel: clk: Disabling unused clocks
May 9 04:56:38.920350 kernel: Warning: unable to open an initial console.
May 9 04:56:38.920357 kernel: Freeing unused kernel memory: 39040K
May 9 04:56:38.920364 kernel: Run /init as init process
May 9 04:56:38.920371 kernel: with arguments:
May 9 04:56:38.920378 kernel: /init
May 9 04:56:38.920384 kernel: with environment:
May 9 04:56:38.920393 kernel: HOME=/
May 9 04:56:38.920399 kernel: TERM=linux
May 9 04:56:38.920406 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 04:56:38.920414 systemd[1]: Successfully made /usr/ read-only.
May 9 04:56:38.920424 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 04:56:38.920432 systemd[1]: Detected virtualization kvm.
May 9 04:56:38.920439 systemd[1]: Detected architecture arm64.
May 9 04:56:38.920448 systemd[1]: Running in initrd.
May 9 04:56:38.920456 systemd[1]: No hostname configured, using default hostname.
May 9 04:56:38.920463 systemd[1]: Hostname set to .
May 9 04:56:38.920471 systemd[1]: Initializing machine ID from VM UUID.
May 9 04:56:38.920478 systemd[1]: Queued start job for default target initrd.target.
May 9 04:56:38.920485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 04:56:38.920493 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 04:56:38.920501 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 04:56:38.920510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 04:56:38.920528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 04:56:38.920537 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 04:56:38.920545 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 04:56:38.920554 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 04:56:38.920561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 04:56:38.920569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 04:56:38.920578 systemd[1]: Reached target paths.target - Path Units.
May 9 04:56:38.920585 systemd[1]: Reached target slices.target - Slice Units.
May 9 04:56:38.920593 systemd[1]: Reached target swap.target - Swaps.
May 9 04:56:38.920600 systemd[1]: Reached target timers.target - Timer Units.
May 9 04:56:38.920608 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 04:56:38.920615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 04:56:38.920623 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 04:56:38.920630 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 9 04:56:38.920638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 04:56:38.920647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 04:56:38.920654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 04:56:38.920662 systemd[1]: Reached target sockets.target - Socket Units.
May 9 04:56:38.920669 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 04:56:38.920677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 04:56:38.920684 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 04:56:38.920692 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 9 04:56:38.920700 systemd[1]: Starting systemd-fsck-usr.service...
May 9 04:56:38.920709 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 04:56:38.920716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 04:56:38.920724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 04:56:38.920731 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 04:56:38.920739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 04:56:38.920748 systemd[1]: Finished systemd-fsck-usr.service.
May 9 04:56:38.920755 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 04:56:38.920781 systemd-journald[238]: Collecting audit messages is disabled.
May 9 04:56:38.920801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:56:38.920810 systemd-journald[238]: Journal started
May 9 04:56:38.920828 systemd-journald[238]: Runtime Journal (/run/log/journal/faec6086458242b6b42272c559cde624) is 5.9M, max 47.3M, 41.4M free.
May 9 04:56:38.912381 systemd-modules-load[240]: Inserted module 'overlay'
May 9 04:56:38.930070 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 04:56:38.930088 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 04:56:38.930777 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 9 04:56:38.933361 kernel: Bridge firewalling registered
May 9 04:56:38.933379 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 04:56:38.934527 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 04:56:38.940889 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 04:56:38.944697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 04:56:38.946286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 04:56:38.955896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 04:56:38.957433 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 04:56:38.962048 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 04:56:38.964383 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 9 04:56:38.964439 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 04:56:38.967535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 04:56:38.969507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 04:56:38.972785 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 04:56:38.978790 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=180634d3e256b1dbb5700949694cb34c82ca79af028365e078744f4de51d78d8
May 9 04:56:39.005626 systemd-resolved[287]: Positive Trust Anchors:
May 9 04:56:39.005644 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 04:56:39.005676 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 04:56:39.010366 systemd-resolved[287]: Defaulting to hostname 'linux'.
May 9 04:56:39.011376 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 04:56:39.014901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 04:56:39.053224 kernel: SCSI subsystem initialized
May 9 04:56:39.057217 kernel: Loading iSCSI transport class v2.0-870.
May 9 04:56:39.066229 kernel: iscsi: registered transport (tcp)
May 9 04:56:39.077498 kernel: iscsi: registered transport (qla4xxx)
May 9 04:56:39.077520 kernel: QLogic iSCSI HBA Driver
May 9 04:56:39.093569 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 04:56:39.115338 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 04:56:39.116872 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 04:56:39.163246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 04:56:39.165556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 04:56:39.230235 kernel: raid6: neonx8 gen() 15782 MB/s
May 9 04:56:39.247234 kernel: raid6: neonx4 gen() 15766 MB/s
May 9 04:56:39.264216 kernel: raid6: neonx2 gen() 13176 MB/s
May 9 04:56:39.281219 kernel: raid6: neonx1 gen() 10505 MB/s
May 9 04:56:39.298216 kernel: raid6: int64x8 gen() 6783 MB/s
May 9 04:56:39.315226 kernel: raid6: int64x4 gen() 7333 MB/s
May 9 04:56:39.332220 kernel: raid6: int64x2 gen() 6108 MB/s
May 9 04:56:39.349345 kernel: raid6: int64x1 gen() 5037 MB/s
May 9 04:56:39.349369 kernel: raid6: using algorithm neonx8 gen() 15782 MB/s
May 9 04:56:39.367272 kernel: raid6: .... xor() 11924 MB/s, rmw enabled
May 9 04:56:39.367285 kernel: raid6: using neon recovery algorithm
May 9 04:56:39.372511 kernel: xor: measuring software checksum speed
May 9 04:56:39.372527 kernel: 8regs : 20744 MB/sec
May 9 04:56:39.373222 kernel: 32regs : 21670 MB/sec
May 9 04:56:39.374474 kernel: arm64_neon : 23556 MB/sec
May 9 04:56:39.374485 kernel: xor: using function: arm64_neon (23556 MB/sec)
May 9 04:56:39.424222 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 04:56:39.430403 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 04:56:39.432784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 04:56:39.460393 systemd-udevd[492]: Using default interface naming scheme 'v255'.
May 9 04:56:39.464570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 04:56:39.467114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 04:56:39.491393 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
May 9 04:56:39.513083 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 04:56:39.515473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 04:56:39.576304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 04:56:39.579259 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 04:56:39.618915 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 9 04:56:39.620020 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 04:56:39.621384 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 04:56:39.622521 kernel: GPT:9289727 != 19775487
May 9 04:56:39.622551 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 04:56:39.622561 kernel: GPT:9289727 != 19775487
May 9 04:56:39.623562 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 04:56:39.624449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 04:56:39.635368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 04:56:39.635491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:56:39.640723 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 04:56:39.643656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 04:56:39.646218 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (555)
May 9 04:56:39.648900 kernel: BTRFS: device fsid 40f1eae7-2721-4eea-912a-4692becebc68 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (551)
May 9 04:56:39.663888 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 04:56:39.666412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:56:39.673253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 04:56:39.682005 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 04:56:39.689964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 04:56:39.696218 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 04:56:39.697382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 04:56:39.700360 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 04:56:39.702531 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 04:56:39.704576 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 04:56:39.707157 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 04:56:39.708962 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 04:56:39.722763 disk-uuid[584]: Primary Header is updated.
May 9 04:56:39.722763 disk-uuid[584]: Secondary Entries is updated.
May 9 04:56:39.722763 disk-uuid[584]: Secondary Header is updated.
May 9 04:56:39.726230 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 04:56:39.734536 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 04:56:40.736798 disk-uuid[589]: The operation has completed successfully.
May 9 04:56:40.737964 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 04:56:40.764592 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 04:56:40.764715 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 04:56:40.788034 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 04:56:40.803693 sh[604]: Success
May 9 04:56:40.820453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 04:56:40.820491 kernel: device-mapper: uevent: version 1.0.3
May 9 04:56:40.822166 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 04:56:40.831244 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 04:56:40.857915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 04:56:40.860674 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 04:56:40.875168 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 04:56:40.882911 kernel: BTRFS info (device dm-0): first mount of filesystem 40f1eae7-2721-4eea-912a-4692becebc68
May 9 04:56:40.882950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 04:56:40.882967 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 04:56:40.884908 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 04:56:40.884941 kernel: BTRFS info (device dm-0): using free space tree
May 9 04:56:40.888714 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 04:56:40.890040 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 9 04:56:40.891468 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 04:56:40.892236 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 04:56:40.893833 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 04:56:40.923757 kernel: BTRFS info (device vda6): first mount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:56:40.923805 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 04:56:40.924647 kernel: BTRFS info (device vda6): using free space tree
May 9 04:56:40.927373 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 04:56:40.932231 kernel: BTRFS info (device vda6): last unmount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:56:40.935260 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 04:56:40.937469 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 04:56:40.997468 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 04:56:41.003798 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 04:56:41.057094 systemd-networkd[791]: lo: Link UP
May 9 04:56:41.057106 systemd-networkd[791]: lo: Gained carrier
May 9 04:56:41.057895 systemd-networkd[791]: Enumeration completed
May 9 04:56:41.058245 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 04:56:41.058399 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 04:56:41.058403 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 04:56:41.059030 systemd-networkd[791]: eth0: Link UP
May 9 04:56:41.059033 systemd-networkd[791]: eth0: Gained carrier
May 9 04:56:41.059040 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 04:56:41.059718 systemd[1]: Reached target network.target - Network.
May 9 04:56:41.076246 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 04:56:41.087847 ignition[695]: Ignition 2.21.0
May 9 04:56:41.087862 ignition[695]: Stage: fetch-offline
May 9 04:56:41.087894 ignition[695]: no configs at "/usr/lib/ignition/base.d"
May 9 04:56:41.087902 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:56:41.088244 ignition[695]: parsed url from cmdline: ""
May 9 04:56:41.088247 ignition[695]: no config URL provided
May 9 04:56:41.088252 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
May 9 04:56:41.088260 ignition[695]: no config at "/usr/lib/ignition/user.ign"
May 9 04:56:41.088286 ignition[695]: op(1): [started] loading QEMU firmware config module
May 9 04:56:41.088290 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 04:56:41.101211 ignition[695]: op(1): [finished] loading QEMU firmware config module
May 9 04:56:41.137915 ignition[695]: parsing config with SHA512: a5732592678a1065d175d025f946ad5c641869750f392abd04c656b9cc3ea9531c0b45527bf1de2184600d3b4bfab115527628eeecda70d96be2d0022276431e
May 9 04:56:41.141797 unknown[695]: fetched base config from "system"
May 9 04:56:41.141811 unknown[695]: fetched user config from "qemu"
May 9 04:56:41.142143 ignition[695]: fetch-offline: fetch-offline passed
May 9 04:56:41.144351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 04:56:41.142211 ignition[695]: Ignition finished successfully
May 9 04:56:41.146277 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 04:56:41.147005 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 04:56:41.172241 ignition[806]: Ignition 2.21.0
May 9 04:56:41.172256 ignition[806]: Stage: kargs
May 9 04:56:41.172390 ignition[806]: no configs at "/usr/lib/ignition/base.d"
May 9 04:56:41.172399 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:56:41.174212 ignition[806]: kargs: kargs passed
May 9 04:56:41.174681 ignition[806]: Ignition finished successfully
May 9 04:56:41.178824 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 04:56:41.180658 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 04:56:41.201433 ignition[815]: Ignition 2.21.0
May 9 04:56:41.201450 ignition[815]: Stage: disks
May 9 04:56:41.201588 ignition[815]: no configs at "/usr/lib/ignition/base.d"
May 9 04:56:41.201597 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:56:41.203532 ignition[815]: disks: disks passed
May 9 04:56:41.203589 ignition[815]: Ignition finished successfully
May 9 04:56:41.205389 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 04:56:41.207223 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 04:56:41.208836 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 04:56:41.210810 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 04:56:41.212685 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 04:56:41.214353 systemd[1]: Reached target basic.target - Basic System.
May 9 04:56:41.216773 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 04:56:41.237479 systemd-fsck[825]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 9 04:56:41.241103 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 04:56:41.243395 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 04:56:41.296216 kernel: EXT4-fs (vda9): mounted filesystem 6dc42008-f956-4b63-8173-09d769f43317 r/w with ordered data mode. Quota mode: none.
May 9 04:56:41.296620 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 04:56:41.297837 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 04:56:41.300116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 04:56:41.301685 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 04:56:41.302625 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 04:56:41.302678 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 04:56:41.302700 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 04:56:41.320590 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 04:56:41.323087 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 04:56:41.329007 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (833)
May 9 04:56:41.329034 kernel: BTRFS info (device vda6): first mount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:56:41.329044 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 04:56:41.329053 kernel: BTRFS info (device vda6): using free space tree
May 9 04:56:41.329062 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 04:56:41.330821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 04:56:41.371317 initrd-setup-root[857]: cut: /sysroot/etc/passwd: No such file or directory
May 9 04:56:41.375482 initrd-setup-root[864]: cut: /sysroot/etc/group: No such file or directory
May 9 04:56:41.379130 initrd-setup-root[871]: cut: /sysroot/etc/shadow: No such file or directory
May 9 04:56:41.382901 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 04:56:41.449322 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 04:56:41.451636 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 04:56:41.453154 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 04:56:41.471265 kernel: BTRFS info (device vda6): last unmount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:56:41.483261 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 04:56:41.494567 ignition[946]: INFO : Ignition 2.21.0
May 9 04:56:41.494567 ignition[946]: INFO : Stage: mount
May 9 04:56:41.496168 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 04:56:41.496168 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:56:41.499361 ignition[946]: INFO : mount: mount passed
May 9 04:56:41.499361 ignition[946]: INFO : Ignition finished successfully
May 9 04:56:41.498745 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 04:56:41.501134 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 04:56:42.025027 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 04:56:42.026495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 04:56:42.043972 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (959)
May 9 04:56:42.044002 kernel: BTRFS info (device vda6): first mount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:56:42.044012 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 04:56:42.044896 kernel: BTRFS info (device vda6): using free space tree
May 9 04:56:42.047214 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 04:56:42.048540 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 04:56:42.075366 ignition[977]: INFO : Ignition 2.21.0
May 9 04:56:42.075366 ignition[977]: INFO : Stage: files
May 9 04:56:42.076851 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 04:56:42.076851 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:56:42.076851 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
May 9 04:56:42.080296 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 04:56:42.080296 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 04:56:42.080296 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 04:56:42.080296 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 04:56:42.080296 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 04:56:42.079890 unknown[977]: wrote ssh authorized keys file for user: core
May 9 04:56:42.087529 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 04:56:42.087529 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 04:56:42.330678 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 04:56:42.497201 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 04:56:42.499336 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 9 04:56:42.818840 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 9 04:56:42.973325 systemd-networkd[791]: eth0: Gained IPv6LL
May 9 04:56:43.060816 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 04:56:43.060816 ignition[977]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 9 04:56:43.064384 ignition[977]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 9 04:56:43.077959 ignition[977]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 04:56:43.081213 ignition[977]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 04:56:43.082729 ignition[977]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 04:56:43.082729 ignition[977]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 9 04:56:43.082729 ignition[977]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 9 04:56:43.082729 ignition[977]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 04:56:43.082729 ignition[977]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 04:56:43.082729 ignition[977]: INFO : files: files passed
May 9 04:56:43.082729 ignition[977]: INFO : Ignition finished successfully
May 9 04:56:43.084474 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 04:56:43.087800 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 04:56:43.090336 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 04:56:43.098843 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 04:56:43.098927 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 04:56:43.104124 initrd-setup-root-after-ignition[1005]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 04:56:43.105421 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 04:56:43.105421 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 04:56:43.108870 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 04:56:43.110434 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 04:56:43.111732 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 04:56:43.114274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 04:56:43.157406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 04:56:43.157559 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 04:56:43.159795 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 04:56:43.161701 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 04:56:43.168734 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 04:56:43.169622 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 04:56:43.202257 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 04:56:43.204614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 04:56:43.221873 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 04:56:43.224132 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 04:56:43.225393 systemd[1]: Stopped target timers.target - Timer Units.
May 9 04:56:43.227147 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 04:56:43.227284 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 04:56:43.229825 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 04:56:43.231815 systemd[1]: Stopped target basic.target - Basic System.
May 9 04:56:43.233417 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 04:56:43.235086 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 04:56:43.237003 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 04:56:43.238940 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 9 04:56:43.240817 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 04:56:43.242612 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 04:56:43.244475 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 04:56:43.246349 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 04:56:43.248025 systemd[1]: Stopped target swap.target - Swaps.
May 9 04:56:43.249510 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 04:56:43.249628 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 04:56:43.251913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 04:56:43.253818 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 04:56:43.255673 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 04:56:43.259290 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 04:56:43.260540 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 04:56:43.260652 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 04:56:43.263382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 04:56:43.263498 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 04:56:43.265427 systemd[1]: Stopped target paths.target - Path Units.
May 9 04:56:43.266956 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 04:56:43.270260 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 04:56:43.271523 systemd[1]: Stopped target slices.target - Slice Units.
May 9 04:56:43.273550 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 04:56:43.275113 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 04:56:43.275219 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 04:56:43.276764 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 04:56:43.276848 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 04:56:43.278360 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 04:56:43.278471 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 04:56:43.280191 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 04:56:43.280311 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 04:56:43.282597 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 04:56:43.283462 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 04:56:43.283597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 04:56:43.300735 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 04:56:43.301614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 04:56:43.301757 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 04:56:43.303563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 04:56:43.303666 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 04:56:43.309229 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 04:56:43.309324 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 04:56:43.316897 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 04:56:43.318022 ignition[1032]: INFO : Ignition 2.21.0
May 9 04:56:43.318022 ignition[1032]: INFO : Stage: umount
May 9 04:56:43.318022 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 04:56:43.318022 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:56:43.323539 ignition[1032]: INFO : umount: umount passed
May 9 04:56:43.323539 ignition[1032]: INFO : Ignition finished successfully
May 9 04:56:43.323820 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 04:56:43.323921 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 04:56:43.325491 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 04:56:43.325566 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 04:56:43.327431 systemd[1]: Stopped target network.target - Network.
May 9 04:56:43.328663 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 04:56:43.328727 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 04:56:43.330523 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 04:56:43.330569 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 04:56:43.332260 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 04:56:43.332302 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 04:56:43.333896 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 04:56:43.333945 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 04:56:43.335511 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 04:56:43.335556 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 04:56:43.337346 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 04:56:43.339128 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 04:56:43.345050 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 04:56:43.346302 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 04:56:43.350083 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 9 04:56:43.350400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 04:56:43.350436 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 04:56:43.353791 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 9 04:56:43.353986 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 04:56:43.354098 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 04:56:43.356849 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 9 04:56:43.357237 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 9 04:56:43.358366 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 04:56:43.358405 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 04:56:43.361304 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 04:56:43.362484 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 04:56:43.362541 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 04:56:43.365585 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 04:56:43.365631 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 04:56:43.368377 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 04:56:43.368419 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 04:56:43.370441 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 04:56:43.374678 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 9 04:56:43.390362 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 04:56:43.390479 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 04:56:43.393612 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 04:56:43.393690 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 04:56:43.395100 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 04:56:43.395161 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 04:56:43.396753 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 04:56:43.396790 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 04:56:43.398452 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 04:56:43.398497 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 04:56:43.401056 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 04:56:43.401099 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 04:56:43.403794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 04:56:43.403835 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 04:56:43.407264 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 04:56:43.408478 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 9 04:56:43.408532 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 9 04:56:43.411185 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 04:56:43.411239 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 04:56:43.414408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 04:56:43.414448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:56:43.421647 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 04:56:43.421731 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 04:56:43.423348 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 04:56:43.425783 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 04:56:43.446607 systemd[1]: Switching root.
May 9 04:56:43.477173 systemd-journald[238]: Journal stopped
May 9 04:56:44.213327 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
May 9 04:56:44.213380 kernel: SELinux: policy capability network_peer_controls=1
May 9 04:56:44.213398 kernel: SELinux: policy capability open_perms=1
May 9 04:56:44.213407 kernel: SELinux: policy capability extended_socket_class=1
May 9 04:56:44.213419 kernel: SELinux: policy capability always_check_network=0
May 9 04:56:44.213428 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 04:56:44.213437 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 04:56:44.213446 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 04:56:44.213454 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 04:56:44.213465 kernel: audit: type=1403 audit(1746766603.618:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 04:56:44.213477 systemd[1]: Successfully loaded SELinux policy in 30.069ms.
May 9 04:56:44.213497 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.476ms.
May 9 04:56:44.213510 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 04:56:44.213521 systemd[1]: Detected virtualization kvm.
May 9 04:56:44.213531 systemd[1]: Detected architecture arm64.
May 9 04:56:44.213542 systemd[1]: Detected first boot.
May 9 04:56:44.213556 systemd[1]: Initializing machine ID from VM UUID.
May 9 04:56:44.213565 kernel: NET: Registered PF_VSOCK protocol family
May 9 04:56:44.213575 zram_generator::config[1076]: No configuration found.
May 9 04:56:44.213586 systemd[1]: Populated /etc with preset unit settings.
May 9 04:56:44.213600 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 9 04:56:44.213610 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 04:56:44.213620 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 04:56:44.213633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 04:56:44.213643 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 04:56:44.213653 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 04:56:44.213663 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 04:56:44.213673 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 04:56:44.213683 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 04:56:44.213692 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 04:56:44.213703 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 04:56:44.213713 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 04:56:44.213724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 04:56:44.213735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 04:56:44.213746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 04:56:44.213756 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 04:56:44.213766 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 04:56:44.213777 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 04:56:44.213787 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 9 04:56:44.213797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 04:56:44.213807 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 04:56:44.213819 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 04:56:44.213829 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 04:56:44.213839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 04:56:44.213850 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 04:56:44.213860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 04:56:44.213870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 04:56:44.213880 systemd[1]: Reached target slices.target - Slice Units.
May 9 04:56:44.213890 systemd[1]: Reached target swap.target - Swaps.
May 9 04:56:44.213909 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 04:56:44.213921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 04:56:44.213932 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 9 04:56:44.213942 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 04:56:44.213952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 04:56:44.213962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 04:56:44.213973 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 04:56:44.213983 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 04:56:44.213993 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 04:56:44.214005 systemd[1]: Mounting media.mount - External Media Directory...
May 9 04:56:44.214015 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 04:56:44.214025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 04:56:44.214035 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 04:56:44.214045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 04:56:44.214055 systemd[1]: Reached target machines.target - Containers.
May 9 04:56:44.214065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 04:56:44.214076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 04:56:44.214088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 04:56:44.214098 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 04:56:44.214108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 04:56:44.214117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 04:56:44.214128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 04:56:44.214138 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 04:56:44.214148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 04:56:44.214158 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 04:56:44.214168 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 04:56:44.214180 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 04:56:44.214278 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 04:56:44.214291 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 04:56:44.214302 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 04:56:44.214313 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 04:56:44.214323 kernel: fuse: init (API version 7.39)
May 9 04:56:44.214332 kernel: loop: module loaded
May 9 04:56:44.214342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 04:56:44.214351 kernel: ACPI: bus type drm_connector registered
May 9 04:56:44.214364 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 04:56:44.214374 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 04:56:44.214384 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 9 04:56:44.214394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 04:56:44.214406 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 04:56:44.214416 systemd[1]: Stopped verity-setup.service.
May 9 04:56:44.214449 systemd-journald[1151]: Collecting audit messages is disabled.
May 9 04:56:44.214471 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 04:56:44.214482 systemd-journald[1151]: Journal started
May 9 04:56:44.214503 systemd-journald[1151]: Runtime Journal (/run/log/journal/faec6086458242b6b42272c559cde624) is 5.9M, max 47.3M, 41.4M free.
May 9 04:56:43.996993 systemd[1]: Queued start job for default target multi-user.target.
May 9 04:56:44.216869 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 04:56:44.006033 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 04:56:44.006424 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 04:56:44.216707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 04:56:44.218365 systemd[1]: Mounted media.mount - External Media Directory.
May 9 04:56:44.220116 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 04:56:44.221399 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 04:56:44.222607 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 04:56:44.223811 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 04:56:44.226424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 04:56:44.228016 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 04:56:44.228184 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 04:56:44.229572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 04:56:44.229734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 04:56:44.231136 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 04:56:44.231391 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 04:56:44.232714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 04:56:44.232879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 04:56:44.234362 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 04:56:44.234515 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 04:56:44.236016 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 04:56:44.236185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 04:56:44.237536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 04:56:44.238966 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 04:56:44.240535 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 04:56:44.242134 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 9 04:56:44.254842 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 04:56:44.257537 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 04:56:44.259605 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 04:56:44.260744 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 04:56:44.260780 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 04:56:44.262741 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 9 04:56:44.270952 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 04:56:44.272162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 04:56:44.273429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 04:56:44.275340 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 04:56:44.276579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 04:56:44.280374 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 04:56:44.281457 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 04:56:44.283649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 04:56:44.287749 systemd-journald[1151]: Time spent on flushing to /var/log/journal/faec6086458242b6b42272c559cde624 is 10.950ms for 875 entries.
May 9 04:56:44.287749 systemd-journald[1151]: System Journal (/var/log/journal/faec6086458242b6b42272c559cde624) is 8M, max 195.6M, 187.6M free.
May 9 04:56:44.344567 systemd-journald[1151]: Received client request to flush runtime journal.
May 9 04:56:44.344625 kernel: loop0: detected capacity change from 0 to 107312
May 9 04:56:44.344647 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 04:56:44.290336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 04:56:44.293884 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 04:56:44.296651 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 04:56:44.298163 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 04:56:44.304359 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 04:56:44.337727 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 04:56:44.341430 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 04:56:44.344614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 04:56:44.348933 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 04:56:44.350660 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 04:56:44.355962 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 04:56:44.368768 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 9 04:56:44.385428 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 9 04:56:44.387569 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
May 9 04:56:44.387587 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
May 9 04:56:44.391231 kernel: loop1: detected capacity change from 0 to 138376
May 9 04:56:44.393304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 04:56:44.446218 kernel: loop2: detected capacity change from 0 to 194096
May 9 04:56:44.488235 kernel: loop3: detected capacity change from 0 to 107312
May 9 04:56:44.493559 kernel: loop4: detected capacity change from 0 to 138376
May 9 04:56:44.501740 kernel: loop5: detected capacity change from 0 to 194096
May 9 04:56:44.505931 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 04:56:44.506594 (sd-merge)[1215]: Merged extensions into '/usr'.
May 9 04:56:44.510133 systemd[1]: Reload requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 04:56:44.510153 systemd[1]: Reloading...
May 9 04:56:44.572219 zram_generator::config[1241]: No configuration found.
May 9 04:56:44.581570 ldconfig[1187]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 04:56:44.643619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 04:56:44.706221 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 04:56:44.706796 systemd[1]: Reloading finished in 196 ms.
May 9 04:56:44.732233 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 04:56:44.733667 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 04:56:44.753402 systemd[1]: Starting ensure-sysext.service...
May 9 04:56:44.755054 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 04:56:44.761038 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 04:56:44.768360 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 04:56:44.771520 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)...
May 9 04:56:44.771619 systemd[1]: Reloading...
May 9 04:56:44.772610 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 9 04:56:44.772641 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 9 04:56:44.772864 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 04:56:44.773065 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 04:56:44.773716 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 04:56:44.773922 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
May 9 04:56:44.773969 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
May 9 04:56:44.777165 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
May 9 04:56:44.777171 systemd-tmpfiles[1276]: Skipping /boot
May 9 04:56:44.787143 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
May 9 04:56:44.787482 systemd-tmpfiles[1276]: Skipping /boot
May 9 04:56:44.812300 systemd-udevd[1279]: Using default interface naming scheme 'v255'.
May 9 04:56:44.818217 zram_generator::config[1303]: No configuration found.
May 9 04:56:44.914035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 04:56:44.935256 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1335)
May 9 04:56:44.991382 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 9 04:56:44.991867 systemd[1]: Reloading finished in 219 ms.
May 9 04:56:45.004728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 04:56:45.015388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 04:56:45.047243 systemd[1]: Finished ensure-sysext.service.
May 9 04:56:45.050015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 04:56:45.056086 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 04:56:45.058602 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 04:56:45.059816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 04:56:45.066790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 04:56:45.069945 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 04:56:45.072752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 04:56:45.075638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 04:56:45.077289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 04:56:45.078122 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 04:56:45.079251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 04:56:45.083314 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 04:56:45.085834 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 04:56:45.092322 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 04:56:45.094700 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 04:56:45.097428 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 04:56:45.101679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 04:56:45.103782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 04:56:45.111384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 04:56:45.113005 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 04:56:45.113169 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 04:56:45.114727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 04:56:45.114881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 04:56:45.117682 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 04:56:45.117833 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 04:56:45.119527 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 04:56:45.121479 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 04:56:45.125739 augenrules[1422]: No rules
May 9 04:56:45.126995 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 04:56:45.127219 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 04:56:45.131454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 04:56:45.131560 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 04:56:45.132811 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 04:56:45.135486 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 04:56:45.138208 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 04:56:45.146219 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 04:56:45.149447 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 04:56:45.152000 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 04:56:45.154221 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:56:45.177998 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 04:56:45.239369 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 04:56:45.240754 systemd[1]: Reached target time-set.target - System Time Set.
May 9 04:56:45.243740 systemd-networkd[1402]: lo: Link UP
May 9 04:56:45.243748 systemd-networkd[1402]: lo: Gained carrier
May 9 04:56:45.244662 systemd-networkd[1402]: Enumeration completed
May 9 04:56:45.244752 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 04:56:45.245059 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 04:56:45.245065 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 04:56:45.245490 systemd-networkd[1402]: eth0: Link UP
May 9 04:56:45.245496 systemd-networkd[1402]: eth0: Gained carrier
May 9 04:56:45.245510 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 04:56:45.247340 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 9 04:56:45.249500 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 04:56:45.252112 systemd-resolved[1404]: Positive Trust Anchors:
May 9 04:56:45.252139 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 04:56:45.252169 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 04:56:45.259954 systemd-resolved[1404]: Defaulting to hostname 'linux'.
May 9 04:56:45.261299 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 04:56:45.261535 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 04:56:45.261872 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
May 9 04:56:45.262879 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 04:56:45.262937 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2025-05-09 04:56:45.492656 UTC.
May 9 04:56:45.262957 systemd[1]: Reached target network.target - Network.
May 9 04:56:45.263834 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 04:56:45.266068 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 04:56:45.268292 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 04:56:45.269824 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 04:56:45.271519 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 04:56:45.272605 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 04:56:45.273803 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 04:56:45.275021 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 04:56:45.275056 systemd[1]: Reached target paths.target - Path Units.
May 9 04:56:45.275949 systemd[1]: Reached target timers.target - Timer Units.
May 9 04:56:45.277651 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 04:56:45.279904 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 04:56:45.282916 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 9 04:56:45.284305 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 9 04:56:45.285518 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 9 04:56:45.292044 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 04:56:45.293598 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 9 04:56:45.295426 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 9 04:56:45.296765 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 04:56:45.299020 systemd[1]: Reached target sockets.target - Socket Units.
May 9 04:56:45.300003 systemd[1]: Reached target basic.target - Basic System.
May 9 04:56:45.300977 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 04:56:45.301012 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 04:56:45.301997 systemd[1]: Starting containerd.service - containerd container runtime... May 9 04:56:45.303879 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 04:56:45.314649 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 04:56:45.316691 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 04:56:45.318656 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 04:56:45.319658 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 04:56:45.320646 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 04:56:45.325291 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 04:56:45.326704 jq[1461]: false May 9 04:56:45.327250 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 04:56:45.329253 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 04:56:45.334667 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 04:56:45.336475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 04:56:45.336879 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 04:56:45.337907 systemd[1]: Starting update-engine.service - Update Engine... 
May 9 04:56:45.339883 extend-filesystems[1462]: Found loop3 May 9 04:56:45.342388 extend-filesystems[1462]: Found loop4 May 9 04:56:45.342388 extend-filesystems[1462]: Found loop5 May 9 04:56:45.342388 extend-filesystems[1462]: Found vda May 9 04:56:45.342388 extend-filesystems[1462]: Found vda1 May 9 04:56:45.342388 extend-filesystems[1462]: Found vda2 May 9 04:56:45.342388 extend-filesystems[1462]: Found vda3 May 9 04:56:45.342388 extend-filesystems[1462]: Found usr May 9 04:56:45.342388 extend-filesystems[1462]: Found vda4 May 9 04:56:45.342388 extend-filesystems[1462]: Found vda6 May 9 04:56:45.342388 extend-filesystems[1462]: Found vda7 May 9 04:56:45.342388 extend-filesystems[1462]: Found vda9 May 9 04:56:45.342388 extend-filesystems[1462]: Checking size of /dev/vda9 May 9 04:56:45.340413 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 04:56:45.344979 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 04:56:45.349971 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 04:56:45.362654 jq[1476]: true May 9 04:56:45.351287 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 04:56:45.376436 extend-filesystems[1462]: Resized partition /dev/vda9 May 9 04:56:45.351591 systemd[1]: motdgen.service: Deactivated successfully. May 9 04:56:45.351742 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 04:56:45.358600 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 04:56:45.358819 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 9 04:56:45.380200 extend-filesystems[1486]: resize2fs 1.47.2 (1-Jan-2025) May 9 04:56:45.382675 jq[1485]: true May 9 04:56:45.390231 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1315) May 9 04:56:45.400726 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 04:56:45.403399 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 04:56:45.411280 tar[1482]: linux-arm64/helm May 9 04:56:45.457011 dbus-daemon[1459]: [system] SELinux support is enabled May 9 04:56:45.459316 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 04:56:45.462435 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 04:56:45.462465 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 04:56:45.463784 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 04:56:45.463802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 04:56:45.480233 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 04:56:45.480515 update_engine[1474]: I20250509 04:56:45.480367 1474 main.cc:92] Flatcar Update Engine starting May 9 04:56:45.493552 update_engine[1474]: I20250509 04:56:45.482348 1474 update_check_scheduler.cc:74] Next update check in 8m0s May 9 04:56:45.482291 systemd[1]: Started update-engine.service - Update Engine. May 9 04:56:45.485175 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 9 04:56:45.493899 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 04:56:45.493899 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 04:56:45.493899 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 04:56:45.493870 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (Power Button) May 9 04:56:45.498908 extend-filesystems[1462]: Resized filesystem in /dev/vda9 May 9 04:56:45.494088 systemd-logind[1470]: New seat seat0. May 9 04:56:45.499765 systemd[1]: Started systemd-logind.service - User Login Management. May 9 04:56:45.502318 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 04:56:45.502542 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 04:56:45.503425 bash[1514]: Updated "/home/core/.ssh/authorized_keys" May 9 04:56:45.506577 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 04:56:45.511897 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 9 04:56:45.545573 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 04:56:45.641220 containerd[1496]: time="2025-05-09T04:56:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 9 04:56:45.642465 containerd[1496]: time="2025-05-09T04:56:45.642432600Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 9 04:56:45.653183 containerd[1496]: time="2025-05-09T04:56:45.653143480Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.56µs" May 9 04:56:45.653183 containerd[1496]: time="2025-05-09T04:56:45.653178200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 9 04:56:45.653269 containerd[1496]: time="2025-05-09T04:56:45.653205960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 9 04:56:45.653360 containerd[1496]: time="2025-05-09T04:56:45.653331840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 9 04:56:45.653360 containerd[1496]: time="2025-05-09T04:56:45.653353000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 9 04:56:45.653444 containerd[1496]: time="2025-05-09T04:56:45.653376640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 9 04:56:45.653444 containerd[1496]: time="2025-05-09T04:56:45.653421720Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 9 04:56:45.653444 containerd[1496]: time="2025-05-09T04:56:45.653432520Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 9 04:56:45.653634 containerd[1496]: time="2025-05-09T04:56:45.653613040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 9 04:56:45.653634 containerd[1496]: time="2025-05-09T04:56:45.653633880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 9 04:56:45.653683 containerd[1496]: time="2025-05-09T04:56:45.653644920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 9 04:56:45.653683 containerd[1496]: time="2025-05-09T04:56:45.653652760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 9 04:56:45.653727 containerd[1496]: time="2025-05-09T04:56:45.653715880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 9 04:56:45.653912 containerd[1496]: time="2025-05-09T04:56:45.653881840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 9 04:56:45.653947 containerd[1496]: time="2025-05-09T04:56:45.653925760Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 9 04:56:45.653947 containerd[1496]: time="2025-05-09T04:56:45.653936960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 9 04:56:45.654041 containerd[1496]: time="2025-05-09T04:56:45.653966920Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 9 04:56:45.654187 containerd[1496]: time="2025-05-09T04:56:45.654151360Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 9 04:56:45.654245 containerd[1496]: time="2025-05-09T04:56:45.654230800Z" level=info msg="metadata content store policy set" policy=shared May 9 04:56:45.657590 containerd[1496]: time="2025-05-09T04:56:45.657553120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 9 04:56:45.657590 containerd[1496]: time="2025-05-09T04:56:45.657594160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 9 04:56:45.657673 containerd[1496]: time="2025-05-09T04:56:45.657606920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 9 04:56:45.657673 containerd[1496]: time="2025-05-09T04:56:45.657618240Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 9 04:56:45.657673 containerd[1496]: time="2025-05-09T04:56:45.657630320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 9 04:56:45.657673 containerd[1496]: time="2025-05-09T04:56:45.657646640Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 9 04:56:45.657673 containerd[1496]: time="2025-05-09T04:56:45.657658000Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 9 04:56:45.657673 containerd[1496]: time="2025-05-09T04:56:45.657668160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 9 04:56:45.657775 containerd[1496]: time="2025-05-09T04:56:45.657677840Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 May 9 04:56:45.657775 containerd[1496]: time="2025-05-09T04:56:45.657687200Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 9 04:56:45.657775 containerd[1496]: time="2025-05-09T04:56:45.657695920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 9 04:56:45.657775 containerd[1496]: time="2025-05-09T04:56:45.657707880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 9 04:56:45.657843 containerd[1496]: time="2025-05-09T04:56:45.657811520Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 9 04:56:45.657843 containerd[1496]: time="2025-05-09T04:56:45.657830640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 9 04:56:45.657882 containerd[1496]: time="2025-05-09T04:56:45.657849800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 9 04:56:45.657882 containerd[1496]: time="2025-05-09T04:56:45.657860880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 9 04:56:45.657882 containerd[1496]: time="2025-05-09T04:56:45.657870000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 9 04:56:45.657882 containerd[1496]: time="2025-05-09T04:56:45.657880080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 9 04:56:45.657960 containerd[1496]: time="2025-05-09T04:56:45.657910360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 9 04:56:45.657960 containerd[1496]: time="2025-05-09T04:56:45.657922000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 9 04:56:45.657960 containerd[1496]: 
time="2025-05-09T04:56:45.657932720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 9 04:56:45.657960 containerd[1496]: time="2025-05-09T04:56:45.657942720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 9 04:56:45.657960 containerd[1496]: time="2025-05-09T04:56:45.657952200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 9 04:56:45.659256 containerd[1496]: time="2025-05-09T04:56:45.659224920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 9 04:56:45.659383 containerd[1496]: time="2025-05-09T04:56:45.659266400Z" level=info msg="Start snapshots syncer" May 9 04:56:45.659383 containerd[1496]: time="2025-05-09T04:56:45.659291240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 9 04:56:45.659765 containerd[1496]: time="2025-05-09T04:56:45.659726680Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 9 04:56:45.659863 containerd[1496]: time="2025-05-09T04:56:45.659782400Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 9 04:56:45.659863 containerd[1496]: time="2025-05-09T04:56:45.659849600Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.659959000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.659987120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.659998640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.660008720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.660019120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.660029440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.660038920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.660061760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.660071920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 9 04:56:45.660092 containerd[1496]: time="2025-05-09T04:56:45.660087240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660112480Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660126160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660134280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660142680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660149600Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660157920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660167480Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660261200Z" level=info msg="runtime interface created" May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660267280Z" level=info msg="created NRI interface" May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660275560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660286480Z" level=info msg="Connect containerd service" May 9 04:56:45.660445 containerd[1496]: time="2025-05-09T04:56:45.660310760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 04:56:45.660851 containerd[1496]: 
time="2025-05-09T04:56:45.660825280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 04:56:45.764259 containerd[1496]: time="2025-05-09T04:56:45.764012800Z" level=info msg="Start subscribing containerd event" May 9 04:56:45.764259 containerd[1496]: time="2025-05-09T04:56:45.764203920Z" level=info msg="Start recovering state" May 9 04:56:45.764364 containerd[1496]: time="2025-05-09T04:56:45.764307920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 04:56:45.764364 containerd[1496]: time="2025-05-09T04:56:45.764351240Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 04:56:45.764776 containerd[1496]: time="2025-05-09T04:56:45.764524800Z" level=info msg="Start event monitor" May 9 04:56:45.764776 containerd[1496]: time="2025-05-09T04:56:45.764558560Z" level=info msg="Start cni network conf syncer for default" May 9 04:56:45.764776 containerd[1496]: time="2025-05-09T04:56:45.764567320Z" level=info msg="Start streaming server" May 9 04:56:45.764776 containerd[1496]: time="2025-05-09T04:56:45.764576160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 9 04:56:45.764776 containerd[1496]: time="2025-05-09T04:56:45.764583120Z" level=info msg="runtime interface starting up..." May 9 04:56:45.764776 containerd[1496]: time="2025-05-09T04:56:45.764588600Z" level=info msg="starting plugins..." May 9 04:56:45.764776 containerd[1496]: time="2025-05-09T04:56:45.764602320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 9 04:56:45.765317 systemd[1]: Started containerd.service - containerd container runtime. 
May 9 04:56:45.767663 containerd[1496]: time="2025-05-09T04:56:45.766717840Z" level=info msg="containerd successfully booted in 0.125977s" May 9 04:56:45.808673 tar[1482]: linux-arm64/LICENSE May 9 04:56:45.808673 tar[1482]: linux-arm64/README.md May 9 04:56:45.829300 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 04:56:46.266988 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 04:56:46.285885 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 04:56:46.288939 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 04:56:46.306782 systemd[1]: issuegen.service: Deactivated successfully. May 9 04:56:46.306983 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 04:56:46.309721 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 04:56:46.330394 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 04:56:46.333156 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 04:56:46.335301 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 04:56:46.336720 systemd[1]: Reached target getty.target - Login Prompts. May 9 04:56:46.814448 systemd-networkd[1402]: eth0: Gained IPv6LL May 9 04:56:46.818261 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 04:56:46.819976 systemd[1]: Reached target network-online.target - Network is Online. May 9 04:56:46.822484 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 04:56:46.824758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 04:56:46.837538 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 04:56:46.851196 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 04:56:46.852514 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 9 04:56:46.854086 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 04:56:46.854502 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 04:56:47.330882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 04:56:47.332523 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 04:56:47.334082 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 04:56:47.335314 systemd[1]: Startup finished in 2.198s (kernel) + 4.909s (initrd) + 3.749s (userspace) = 10.857s. May 9 04:56:47.806484 kubelet[1586]: E0509 04:56:47.806368 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 04:56:47.808825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 04:56:47.808972 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 04:56:47.809287 systemd[1]: kubelet.service: Consumed 815ms CPU time, 242.5M memory peak. May 9 04:56:52.017558 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 04:56:52.018639 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:57398.service - OpenSSH per-connection server daemon (10.0.0.1:57398). May 9 04:56:52.101868 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 57398 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:56:52.103747 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:56:52.111242 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 9 04:56:52.112114 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 04:56:52.117364 systemd-logind[1470]: New session 1 of user core. May 9 04:56:52.137899 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 04:56:52.140302 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 04:56:52.161358 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 04:56:52.163732 systemd-logind[1470]: New session c1 of user core. May 9 04:56:52.268784 systemd[1604]: Queued start job for default target default.target. May 9 04:56:52.277121 systemd[1604]: Created slice app.slice - User Application Slice. May 9 04:56:52.277149 systemd[1604]: Reached target paths.target - Paths. May 9 04:56:52.277184 systemd[1604]: Reached target timers.target - Timers. May 9 04:56:52.278356 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 04:56:52.286965 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 04:56:52.287023 systemd[1604]: Reached target sockets.target - Sockets. May 9 04:56:52.287058 systemd[1604]: Reached target basic.target - Basic System. May 9 04:56:52.287086 systemd[1604]: Reached target default.target - Main User Target. May 9 04:56:52.287111 systemd[1604]: Startup finished in 117ms. May 9 04:56:52.287285 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 04:56:52.288573 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 04:56:52.349917 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:57402.service - OpenSSH per-connection server daemon (10.0.0.1:57402). 
May 9 04:56:52.401540 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 57402 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:56:52.402735 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:56:52.407309 systemd-logind[1470]: New session 2 of user core. May 9 04:56:52.417392 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 04:56:52.469902 sshd[1617]: Connection closed by 10.0.0.1 port 57402 May 9 04:56:52.470324 sshd-session[1615]: pam_unix(sshd:session): session closed for user core May 9 04:56:52.480242 systemd[1]: sshd@1-10.0.0.63:22-10.0.0.1:57402.service: Deactivated successfully. May 9 04:56:52.483423 systemd[1]: session-2.scope: Deactivated successfully. May 9 04:56:52.484973 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. May 9 04:56:52.486422 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:39552.service - OpenSSH per-connection server daemon (10.0.0.1:39552). May 9 04:56:52.487253 systemd-logind[1470]: Removed session 2. May 9 04:56:52.539429 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 39552 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:56:52.540515 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:56:52.545304 systemd-logind[1470]: New session 3 of user core. May 9 04:56:52.556366 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 04:56:52.604700 sshd[1625]: Connection closed by 10.0.0.1 port 39552 May 9 04:56:52.604996 sshd-session[1622]: pam_unix(sshd:session): session closed for user core May 9 04:56:52.614308 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:39552.service: Deactivated successfully. May 9 04:56:52.615648 systemd[1]: session-3.scope: Deactivated successfully. May 9 04:56:52.616269 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. 
May 9 04:56:52.617897 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:39554.service - OpenSSH per-connection server daemon (10.0.0.1:39554). May 9 04:56:52.618755 systemd-logind[1470]: Removed session 3. May 9 04:56:52.673787 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 39554 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:56:52.674931 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:56:52.678850 systemd-logind[1470]: New session 4 of user core. May 9 04:56:52.689387 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 04:56:52.740100 sshd[1633]: Connection closed by 10.0.0.1 port 39554 May 9 04:56:52.740582 sshd-session[1630]: pam_unix(sshd:session): session closed for user core May 9 04:56:52.751370 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:39554.service: Deactivated successfully. May 9 04:56:52.752732 systemd[1]: session-4.scope: Deactivated successfully. May 9 04:56:52.754007 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. May 9 04:56:52.755070 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:39564.service - OpenSSH per-connection server daemon (10.0.0.1:39564). May 9 04:56:52.755962 systemd-logind[1470]: Removed session 4. May 9 04:56:52.814783 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 39564 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:56:52.815975 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:56:52.820812 systemd-logind[1470]: New session 5 of user core. May 9 04:56:52.833359 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 9 04:56:52.897919 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 9 04:56:52.898499 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:56:52.912948 sudo[1642]: pam_unix(sudo:session): session closed for user root
May 9 04:56:52.915627 sshd[1641]: Connection closed by 10.0.0.1 port 39564
May 9 04:56:52.915140 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
May 9 04:56:52.935858 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:39564.service: Deactivated successfully.
May 9 04:56:52.940919 systemd[1]: session-5.scope: Deactivated successfully.
May 9 04:56:52.946333 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit.
May 9 04:56:52.949588 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:39566.service - OpenSSH per-connection server daemon (10.0.0.1:39566).
May 9 04:56:52.951073 systemd-logind[1470]: Removed session 5.
May 9 04:56:53.032125 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 39566 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:56:53.033461 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:56:53.039691 systemd-logind[1470]: New session 6 of user core.
May 9 04:56:53.052374 systemd[1]: Started session-6.scope - Session 6 of User core.
May 9 04:56:53.105311 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 9 04:56:53.105588 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:56:53.108624 sudo[1652]: pam_unix(sudo:session): session closed for user root
May 9 04:56:53.112815 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 9 04:56:53.113064 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:56:53.124749 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 04:56:53.156489 augenrules[1674]: No rules
May 9 04:56:53.157320 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 04:56:53.157505 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 04:56:53.158324 sudo[1651]: pam_unix(sudo:session): session closed for user root
May 9 04:56:53.159630 sshd[1650]: Connection closed by 10.0.0.1 port 39566
May 9 04:56:53.159930 sshd-session[1647]: pam_unix(sshd:session): session closed for user core
May 9 04:56:53.174124 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:39566.service: Deactivated successfully.
May 9 04:56:53.176414 systemd[1]: session-6.scope: Deactivated successfully.
May 9 04:56:53.177587 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit.
May 9 04:56:53.178597 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:39568.service - OpenSSH per-connection server daemon (10.0.0.1:39568).
May 9 04:56:53.179536 systemd-logind[1470]: Removed session 6.
May 9 04:56:53.239663 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 39568 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:56:53.240997 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:56:53.245293 systemd-logind[1470]: New session 7 of user core.
May 9 04:56:53.259388 systemd[1]: Started session-7.scope - Session 7 of User core.
May 9 04:56:53.309799 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 9 04:56:53.310060 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:56:53.676784 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 9 04:56:53.688525 (dockerd)[1706]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 9 04:56:53.945731 dockerd[1706]: time="2025-05-09T04:56:53.945461086Z" level=info msg="Starting up"
May 9 04:56:53.947422 dockerd[1706]: time="2025-05-09T04:56:53.946894074Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 9 04:56:54.051914 dockerd[1706]: time="2025-05-09T04:56:54.051868183Z" level=info msg="Loading containers: start."
May 9 04:56:54.060765 kernel: Initializing XFRM netlink socket
May 9 04:56:54.252761 systemd-networkd[1402]: docker0: Link UP
May 9 04:56:54.256239 dockerd[1706]: time="2025-05-09T04:56:54.255742497Z" level=info msg="Loading containers: done."
May 9 04:56:54.270099 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2776831630-merged.mount: Deactivated successfully.
May 9 04:56:54.271195 dockerd[1706]: time="2025-05-09T04:56:54.270815124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 9 04:56:54.271195 dockerd[1706]: time="2025-05-09T04:56:54.270938490Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 9 04:56:54.271195 dockerd[1706]: time="2025-05-09T04:56:54.271036730Z" level=info msg="Initializing buildkit"
May 9 04:56:54.290773 dockerd[1706]: time="2025-05-09T04:56:54.290689290Z" level=info msg="Completed buildkit initialization"
May 9 04:56:54.297885 dockerd[1706]: time="2025-05-09T04:56:54.297848073Z" level=info msg="Daemon has completed initialization"
May 9 04:56:54.298108 dockerd[1706]: time="2025-05-09T04:56:54.297943809Z" level=info msg="API listen on /run/docker.sock"
May 9 04:56:54.298313 systemd[1]: Started docker.service - Docker Application Container Engine.
May 9 04:56:55.206952 containerd[1496]: time="2025-05-09T04:56:55.206904350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 9 04:56:55.778837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054890794.mount: Deactivated successfully.
May 9 04:56:56.843348 containerd[1496]: time="2025-05-09T04:56:56.843297892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:56.843817 containerd[1496]: time="2025-05-09T04:56:56.843782447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 9 04:56:56.844543 containerd[1496]: time="2025-05-09T04:56:56.844512363Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:56.846835 containerd[1496]: time="2025-05-09T04:56:56.846806815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:56.848172 containerd[1496]: time="2025-05-09T04:56:56.848007180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.641062412s"
May 9 04:56:56.848172 containerd[1496]: time="2025-05-09T04:56:56.848044218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 9 04:56:56.863227 containerd[1496]: time="2025-05-09T04:56:56.863137726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 9 04:56:58.031254 containerd[1496]: time="2025-05-09T04:56:58.030681864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:58.031841 containerd[1496]: time="2025-05-09T04:56:58.031819127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 9 04:56:58.032648 containerd[1496]: time="2025-05-09T04:56:58.032628384Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:58.035237 containerd[1496]: time="2025-05-09T04:56:58.034804826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:58.036566 containerd[1496]: time="2025-05-09T04:56:58.036532406Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.173359106s"
May 9 04:56:58.036604 containerd[1496]: time="2025-05-09T04:56:58.036565798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 9 04:56:58.051301 containerd[1496]: time="2025-05-09T04:56:58.051259984Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 9 04:56:58.059317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 9 04:56:58.060867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:56:58.169931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:56:58.173623 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 04:56:58.234347 kubelet[2008]: E0509 04:56:58.234290 2008 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 04:56:58.237640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 04:56:58.237786 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 04:56:58.238065 systemd[1]: kubelet.service: Consumed 139ms CPU time, 97.2M memory peak.
May 9 04:56:58.966108 containerd[1496]: time="2025-05-09T04:56:58.966064385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:58.966855 containerd[1496]: time="2025-05-09T04:56:58.966782436Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 9 04:56:58.968794 containerd[1496]: time="2025-05-09T04:56:58.967438008Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:58.970181 containerd[1496]: time="2025-05-09T04:56:58.970145788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:56:58.971916 containerd[1496]: time="2025-05-09T04:56:58.971886040Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 920.590814ms"
May 9 04:56:58.972032 containerd[1496]: time="2025-05-09T04:56:58.972014258Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 9 04:56:58.987158 containerd[1496]: time="2025-05-09T04:56:58.987112126Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 9 04:56:59.865712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111674866.mount: Deactivated successfully.
May 9 04:57:00.168502 containerd[1496]: time="2025-05-09T04:57:00.168453416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:00.169169 containerd[1496]: time="2025-05-09T04:57:00.169118068Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 9 04:57:00.169727 containerd[1496]: time="2025-05-09T04:57:00.169706383Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:00.172360 containerd[1496]: time="2025-05-09T04:57:00.171425888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:00.172360 containerd[1496]: time="2025-05-09T04:57:00.172077402Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.184742928s"
May 9 04:57:00.172360 containerd[1496]: time="2025-05-09T04:57:00.172107174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 9 04:57:00.186763 containerd[1496]: time="2025-05-09T04:57:00.186738112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 9 04:57:00.644928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719096063.mount: Deactivated successfully.
May 9 04:57:01.144590 containerd[1496]: time="2025-05-09T04:57:01.144544599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:01.145384 containerd[1496]: time="2025-05-09T04:57:01.144923301Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 9 04:57:01.146322 containerd[1496]: time="2025-05-09T04:57:01.146295681Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:01.148433 containerd[1496]: time="2025-05-09T04:57:01.148404104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:01.150041 containerd[1496]: time="2025-05-09T04:57:01.149560489Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 962.598689ms"
May 9 04:57:01.150041 containerd[1496]: time="2025-05-09T04:57:01.149870687Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 9 04:57:01.166683 containerd[1496]: time="2025-05-09T04:57:01.166644706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 9 04:57:01.579461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019150120.mount: Deactivated successfully.
May 9 04:57:01.583701 containerd[1496]: time="2025-05-09T04:57:01.583660796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:01.584070 containerd[1496]: time="2025-05-09T04:57:01.584028095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 9 04:57:01.584703 containerd[1496]: time="2025-05-09T04:57:01.584673748Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:01.589636 containerd[1496]: time="2025-05-09T04:57:01.588649622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:01.589636 containerd[1496]: time="2025-05-09T04:57:01.589350569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 422.662777ms"
May 9 04:57:01.589636 containerd[1496]: time="2025-05-09T04:57:01.589378838Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 9 04:57:01.605801 containerd[1496]: time="2025-05-09T04:57:01.605770782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 9 04:57:02.083393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021917753.mount: Deactivated successfully.
May 9 04:57:03.368585 containerd[1496]: time="2025-05-09T04:57:03.368536775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:03.370046 containerd[1496]: time="2025-05-09T04:57:03.369569270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 9 04:57:03.370855 containerd[1496]: time="2025-05-09T04:57:03.370818929Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:03.373363 containerd[1496]: time="2025-05-09T04:57:03.373330521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:57:03.374449 containerd[1496]: time="2025-05-09T04:57:03.374397439Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.768480903s"
May 9 04:57:03.374449 containerd[1496]: time="2025-05-09T04:57:03.374431058Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 9 04:57:07.952695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:57:07.952835 systemd[1]: kubelet.service: Consumed 139ms CPU time, 97.2M memory peak.
May 9 04:57:07.956126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:57:07.977401 systemd[1]: Reload requested from client PID 2260 ('systemctl') (unit session-7.scope)...
May 9 04:57:07.977414 systemd[1]: Reloading...
May 9 04:57:08.040230 zram_generator::config[2306]: No configuration found.
May 9 04:57:08.108372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 04:57:08.193515 systemd[1]: Reloading finished in 215 ms.
May 9 04:57:08.233886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:57:08.236837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:57:08.237300 systemd[1]: kubelet.service: Deactivated successfully.
May 9 04:57:08.237495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:57:08.237538 systemd[1]: kubelet.service: Consumed 84ms CPU time, 82.4M memory peak.
May 9 04:57:08.238890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:57:08.347131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:57:08.351144 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 04:57:08.389033 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 04:57:08.389033 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 9 04:57:08.389033 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 04:57:08.389415 kubelet[2351]: I0509 04:57:08.389223 2351 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 04:57:09.012007 kubelet[2351]: I0509 04:57:09.011949 2351 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 9 04:57:09.012007 kubelet[2351]: I0509 04:57:09.011977 2351 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 04:57:09.012190 kubelet[2351]: I0509 04:57:09.012169 2351 server.go:927] "Client rotation is on, will bootstrap in background"
May 9 04:57:09.044877 kubelet[2351]: I0509 04:57:09.044839 2351 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 04:57:09.044877 kubelet[2351]: E0509 04:57:09.044872 2351 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.055339 kubelet[2351]: I0509 04:57:09.053933 2351 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 04:57:09.055339 kubelet[2351]: I0509 04:57:09.055120 2351 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 04:57:09.055516 kubelet[2351]: I0509 04:57:09.055165 2351 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 9 04:57:09.055600 kubelet[2351]: I0509 04:57:09.055580 2351 topology_manager.go:138] "Creating topology manager with none policy"
May 9 04:57:09.055600 kubelet[2351]: I0509 04:57:09.055594 2351 container_manager_linux.go:301] "Creating device plugin manager"
May 9 04:57:09.056086 kubelet[2351]: I0509 04:57:09.056053 2351 state_mem.go:36] "Initialized new in-memory state store"
May 9 04:57:09.058706 kubelet[2351]: I0509 04:57:09.058682 2351 kubelet.go:400] "Attempting to sync node with API server"
May 9 04:57:09.058706 kubelet[2351]: I0509 04:57:09.058705 2351 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 04:57:09.059023 kubelet[2351]: I0509 04:57:09.059002 2351 kubelet.go:312] "Adding apiserver pod source"
May 9 04:57:09.059157 kubelet[2351]: I0509 04:57:09.059142 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 04:57:09.060112 kubelet[2351]: W0509 04:57:09.060020 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.060112 kubelet[2351]: E0509 04:57:09.060072 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.060309 kubelet[2351]: W0509 04:57:09.060124 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.060309 kubelet[2351]: E0509 04:57:09.060150 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.060361 kubelet[2351]: I0509 04:57:09.060338 2351 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 9 04:57:09.060694 kubelet[2351]: I0509 04:57:09.060681 2351 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 04:57:09.060850 kubelet[2351]: W0509 04:57:09.060839 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 9 04:57:09.061689 kubelet[2351]: I0509 04:57:09.061664 2351 server.go:1264] "Started kubelet"
May 9 04:57:09.065088 kubelet[2351]: I0509 04:57:09.062849 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 04:57:09.065088 kubelet[2351]: I0509 04:57:09.062828 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 04:57:09.065088 kubelet[2351]: I0509 04:57:09.063089 2351 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 04:57:09.065088 kubelet[2351]: I0509 04:57:09.063127 2351 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 9 04:57:09.065088 kubelet[2351]: I0509 04:57:09.064101 2351 server.go:455] "Adding debug handlers to kubelet server"
May 9 04:57:09.065088 kubelet[2351]: E0509 04:57:09.064017 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183dc2fe8eb9be36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 04:57:09.061639734 +0000 UTC m=+0.706987058,LastTimestamp:2025-05-09 04:57:09.061639734 +0000 UTC m=+0.706987058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 9 04:57:09.066431 kubelet[2351]: E0509 04:57:09.066408 2351 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 04:57:09.066602 kubelet[2351]: I0509 04:57:09.066590 2351 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 9 04:57:09.067075 kubelet[2351]: I0509 04:57:09.067047 2351 reconciler.go:26] "Reconciler: start to sync state"
May 9 04:57:09.067173 kubelet[2351]: I0509 04:57:09.067159 2351 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 9 04:57:09.068529 kubelet[2351]: W0509 04:57:09.068073 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.068529 kubelet[2351]: E0509 04:57:09.068119 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.068879 kubelet[2351]: I0509 04:57:09.068859 2351 factory.go:221] Registration of the systemd container factory successfully
May 9 04:57:09.069035 kubelet[2351]: I0509 04:57:09.069016 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 04:57:09.069680 kubelet[2351]: E0509 04:57:09.069141 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms"
May 9 04:57:09.069680 kubelet[2351]: E0509 04:57:09.069258 2351 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 9 04:57:09.071414 kubelet[2351]: I0509 04:57:09.071379 2351 factory.go:221] Registration of the containerd container factory successfully
May 9 04:57:09.078233 kubelet[2351]: I0509 04:57:09.078185 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 9 04:57:09.079236 kubelet[2351]: I0509 04:57:09.079218 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 9 04:57:09.079318 kubelet[2351]: I0509 04:57:09.079308 2351 status_manager.go:217] "Starting to sync pod status with apiserver"
May 9 04:57:09.079401 kubelet[2351]: I0509 04:57:09.079390 2351 kubelet.go:2337] "Starting kubelet main sync loop"
May 9 04:57:09.079486 kubelet[2351]: E0509 04:57:09.079469 2351 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 9 04:57:09.082938 kubelet[2351]: W0509 04:57:09.082896 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.083032 kubelet[2351]: E0509 04:57:09.083019 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
May 9 04:57:09.083854 kubelet[2351]: I0509 04:57:09.083827 2351 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 9 04:57:09.083939 kubelet[2351]: I0509 04:57:09.083927 2351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 9 04:57:09.084016 kubelet[2351]: I0509 04:57:09.084007 2351 state_mem.go:36] "Initialized new in-memory state store"
May 9 04:57:09.168351 kubelet[2351]: I0509 04:57:09.168301 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 9 04:57:09.168718 kubelet[2351]: E0509 04:57:09.168684 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost"
May 9 04:57:09.179956 kubelet[2351]: E0509 04:57:09.179911 2351 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 9 04:57:09.190324 kubelet[2351]: I0509 04:57:09.190288 2351 policy_none.go:49] "None policy: Start"
May 9 04:57:09.190998 kubelet[2351]: I0509 04:57:09.190975 2351 memory_manager.go:170] "Starting memorymanager" policy="None"
May 9 04:57:09.191043 kubelet[2351]: I0509 04:57:09.191003 2351 state_mem.go:35] "Initializing new in-memory state store"
May 9 04:57:09.197239 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 9 04:57:09.208881 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 9 04:57:09.213630 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 9 04:57:09.223946 kubelet[2351]: I0509 04:57:09.223904 2351 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 04:57:09.224157 kubelet[2351]: I0509 04:57:09.224107 2351 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 04:57:09.224269 kubelet[2351]: I0509 04:57:09.224249 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 04:57:09.226599 kubelet[2351]: E0509 04:57:09.226576 2351 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 04:57:09.270481 kubelet[2351]: E0509 04:57:09.270358 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms" May 9 04:57:09.370769 kubelet[2351]: I0509 04:57:09.370728 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 04:57:09.371115 kubelet[2351]: E0509 04:57:09.371074 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" May 9 04:57:09.380834 kubelet[2351]: I0509 04:57:09.380781 2351 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 04:57:09.381821 kubelet[2351]: I0509 04:57:09.381786 2351 topology_manager.go:215] "Topology Admit Handler" podUID="c2ccfc2ee4d73dd967943644e1787751" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 04:57:09.382528 kubelet[2351]: I0509 04:57:09.382506 2351 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" 
podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 04:57:09.388924 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 9 04:57:09.399358 systemd[1]: Created slice kubepods-burstable-podc2ccfc2ee4d73dd967943644e1787751.slice - libcontainer container kubepods-burstable-podc2ccfc2ee4d73dd967943644e1787751.slice. May 9 04:57:09.415427 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 9 04:57:09.469301 kubelet[2351]: I0509 04:57:09.469252 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:09.469301 kubelet[2351]: I0509 04:57:09.469293 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 04:57:09.469633 kubelet[2351]: I0509 04:57:09.469313 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2ccfc2ee4d73dd967943644e1787751-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2ccfc2ee4d73dd967943644e1787751\") " pod="kube-system/kube-apiserver-localhost" May 9 04:57:09.469633 kubelet[2351]: I0509 04:57:09.469337 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/c2ccfc2ee4d73dd967943644e1787751-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2ccfc2ee4d73dd967943644e1787751\") " pod="kube-system/kube-apiserver-localhost" May 9 04:57:09.469633 kubelet[2351]: I0509 04:57:09.469355 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2ccfc2ee4d73dd967943644e1787751-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2ccfc2ee4d73dd967943644e1787751\") " pod="kube-system/kube-apiserver-localhost" May 9 04:57:09.469633 kubelet[2351]: I0509 04:57:09.469389 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:09.469633 kubelet[2351]: I0509 04:57:09.469409 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:09.469747 kubelet[2351]: I0509 04:57:09.469435 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:09.469747 kubelet[2351]: I0509 04:57:09.469453 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:09.670832 kubelet[2351]: E0509 04:57:09.670777 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms" May 9 04:57:09.697780 containerd[1496]: time="2025-05-09T04:57:09.697732711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 9 04:57:09.714835 containerd[1496]: time="2025-05-09T04:57:09.714796772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2ccfc2ee4d73dd967943644e1787751,Namespace:kube-system,Attempt:0,}" May 9 04:57:09.718550 containerd[1496]: time="2025-05-09T04:57:09.718374532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 9 04:57:09.772234 kubelet[2351]: I0509 04:57:09.772174 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 04:57:09.772563 kubelet[2351]: E0509 04:57:09.772523 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" May 9 04:57:09.948694 kubelet[2351]: W0509 04:57:09.948544 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection 
refused May 9 04:57:09.948694 kubelet[2351]: E0509 04:57:09.948616 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused May 9 04:57:09.992350 kubelet[2351]: W0509 04:57:09.992292 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused May 9 04:57:09.992350 kubelet[2351]: E0509 04:57:09.992344 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused May 9 04:57:10.202118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202440637.mount: Deactivated successfully. 
May 9 04:57:10.206485 containerd[1496]: time="2025-05-09T04:57:10.206445660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 04:57:10.207669 containerd[1496]: time="2025-05-09T04:57:10.207640090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 04:57:10.208845 containerd[1496]: time="2025-05-09T04:57:10.208707853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 9 04:57:10.209409 containerd[1496]: time="2025-05-09T04:57:10.209381357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 9 04:57:10.210609 containerd[1496]: time="2025-05-09T04:57:10.210564174Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 04:57:10.212018 containerd[1496]: time="2025-05-09T04:57:10.211760527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 9 04:57:10.213351 containerd[1496]: time="2025-05-09T04:57:10.213327191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 04:57:10.215226 containerd[1496]: time="2025-05-09T04:57:10.215062692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 514.994435ms" May 9 04:57:10.215592 containerd[1496]: time="2025-05-09T04:57:10.215549619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 498.779102ms" May 9 04:57:10.215928 containerd[1496]: time="2025-05-09T04:57:10.215875919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 04:57:10.216657 containerd[1496]: time="2025-05-09T04:57:10.216504570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 496.499671ms" May 9 04:57:10.233650 containerd[1496]: time="2025-05-09T04:57:10.233609284Z" level=info msg="connecting to shim 4f173f8f2f19a007bf4c39e498586e31d2da872a37750fb2c1e305879b86a861" address="unix:///run/containerd/s/64d5dc026e5506f225f49ddcc64cfc62e4a75ef0976c1765110ba084fba44768" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:10.240047 containerd[1496]: time="2025-05-09T04:57:10.240009215Z" level=info msg="connecting to shim 6d891709db64dd4eb1a72380e2326fb0ccffe03bf74e66cde18c8af4546c1036" address="unix:///run/containerd/s/dbd4f129bca9449cae4f75f3e0c152014e85ba64b69bf40354107c3d1010bdf7" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:10.243151 containerd[1496]: 
time="2025-05-09T04:57:10.241769585Z" level=info msg="connecting to shim 5c413c9a2fd3e6dc971ca0cd9ed37e0a8ab67f2c61e83908a4c7d858ce3757f1" address="unix:///run/containerd/s/a0ad7ced319bb71be84339063bb181895fdd400173f3ac4d06fcab45847169ea" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:10.257372 systemd[1]: Started cri-containerd-4f173f8f2f19a007bf4c39e498586e31d2da872a37750fb2c1e305879b86a861.scope - libcontainer container 4f173f8f2f19a007bf4c39e498586e31d2da872a37750fb2c1e305879b86a861. May 9 04:57:10.263311 systemd[1]: Started cri-containerd-5c413c9a2fd3e6dc971ca0cd9ed37e0a8ab67f2c61e83908a4c7d858ce3757f1.scope - libcontainer container 5c413c9a2fd3e6dc971ca0cd9ed37e0a8ab67f2c61e83908a4c7d858ce3757f1. May 9 04:57:10.264802 systemd[1]: Started cri-containerd-6d891709db64dd4eb1a72380e2326fb0ccffe03bf74e66cde18c8af4546c1036.scope - libcontainer container 6d891709db64dd4eb1a72380e2326fb0ccffe03bf74e66cde18c8af4546c1036. May 9 04:57:10.305128 containerd[1496]: time="2025-05-09T04:57:10.304799206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2ccfc2ee4d73dd967943644e1787751,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d891709db64dd4eb1a72380e2326fb0ccffe03bf74e66cde18c8af4546c1036\"" May 9 04:57:10.305128 containerd[1496]: time="2025-05-09T04:57:10.304910015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f173f8f2f19a007bf4c39e498586e31d2da872a37750fb2c1e305879b86a861\"" May 9 04:57:10.307650 containerd[1496]: time="2025-05-09T04:57:10.307613323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c413c9a2fd3e6dc971ca0cd9ed37e0a8ab67f2c61e83908a4c7d858ce3757f1\"" May 9 04:57:10.309707 containerd[1496]: time="2025-05-09T04:57:10.309505205Z" 
level=info msg="CreateContainer within sandbox \"4f173f8f2f19a007bf4c39e498586e31d2da872a37750fb2c1e305879b86a861\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 04:57:10.310442 containerd[1496]: time="2025-05-09T04:57:10.310408537Z" level=info msg="CreateContainer within sandbox \"6d891709db64dd4eb1a72380e2326fb0ccffe03bf74e66cde18c8af4546c1036\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 04:57:10.311089 containerd[1496]: time="2025-05-09T04:57:10.311064821Z" level=info msg="CreateContainer within sandbox \"5c413c9a2fd3e6dc971ca0cd9ed37e0a8ab67f2c61e83908a4c7d858ce3757f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 04:57:10.319600 containerd[1496]: time="2025-05-09T04:57:10.319545334Z" level=info msg="Container 0baf05878b4ea0df1b7457854f7ea8241e6c7d971b5967c4e382fa685a1a8210: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:10.322032 containerd[1496]: time="2025-05-09T04:57:10.321465650Z" level=info msg="Container 568b276388751cc7e8fa1d363780a29cadeff50e89014013d33214d80232a988: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:10.323987 containerd[1496]: time="2025-05-09T04:57:10.323936847Z" level=info msg="Container bb925a2c0fe99f50b9eeff20027c24423e94cbacb37e1a81c81e96acfbd1363b: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:10.328725 containerd[1496]: time="2025-05-09T04:57:10.328479135Z" level=info msg="CreateContainer within sandbox \"4f173f8f2f19a007bf4c39e498586e31d2da872a37750fb2c1e305879b86a861\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0baf05878b4ea0df1b7457854f7ea8241e6c7d971b5967c4e382fa685a1a8210\"" May 9 04:57:10.329347 containerd[1496]: time="2025-05-09T04:57:10.329320595Z" level=info msg="StartContainer for \"0baf05878b4ea0df1b7457854f7ea8241e6c7d971b5967c4e382fa685a1a8210\"" May 9 04:57:10.330673 containerd[1496]: time="2025-05-09T04:57:10.330291125Z" level=info msg="connecting to shim 
0baf05878b4ea0df1b7457854f7ea8241e6c7d971b5967c4e382fa685a1a8210" address="unix:///run/containerd/s/64d5dc026e5506f225f49ddcc64cfc62e4a75ef0976c1765110ba084fba44768" protocol=ttrpc version=3 May 9 04:57:10.331465 containerd[1496]: time="2025-05-09T04:57:10.331434176Z" level=info msg="CreateContainer within sandbox \"5c413c9a2fd3e6dc971ca0cd9ed37e0a8ab67f2c61e83908a4c7d858ce3757f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"568b276388751cc7e8fa1d363780a29cadeff50e89014013d33214d80232a988\"" May 9 04:57:10.331867 containerd[1496]: time="2025-05-09T04:57:10.331844453Z" level=info msg="StartContainer for \"568b276388751cc7e8fa1d363780a29cadeff50e89014013d33214d80232a988\"" May 9 04:57:10.332326 containerd[1496]: time="2025-05-09T04:57:10.332293897Z" level=info msg="CreateContainer within sandbox \"6d891709db64dd4eb1a72380e2326fb0ccffe03bf74e66cde18c8af4546c1036\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb925a2c0fe99f50b9eeff20027c24423e94cbacb37e1a81c81e96acfbd1363b\"" May 9 04:57:10.332771 containerd[1496]: time="2025-05-09T04:57:10.332744541Z" level=info msg="StartContainer for \"bb925a2c0fe99f50b9eeff20027c24423e94cbacb37e1a81c81e96acfbd1363b\"" May 9 04:57:10.333094 containerd[1496]: time="2025-05-09T04:57:10.333067638Z" level=info msg="connecting to shim 568b276388751cc7e8fa1d363780a29cadeff50e89014013d33214d80232a988" address="unix:///run/containerd/s/a0ad7ced319bb71be84339063bb181895fdd400173f3ac4d06fcab45847169ea" protocol=ttrpc version=3 May 9 04:57:10.333681 containerd[1496]: time="2025-05-09T04:57:10.333653760Z" level=info msg="connecting to shim bb925a2c0fe99f50b9eeff20027c24423e94cbacb37e1a81c81e96acfbd1363b" address="unix:///run/containerd/s/dbd4f129bca9449cae4f75f3e0c152014e85ba64b69bf40354107c3d1010bdf7" protocol=ttrpc version=3 May 9 04:57:10.354339 systemd[1]: Started cri-containerd-0baf05878b4ea0df1b7457854f7ea8241e6c7d971b5967c4e382fa685a1a8210.scope - libcontainer 
container 0baf05878b4ea0df1b7457854f7ea8241e6c7d971b5967c4e382fa685a1a8210. May 9 04:57:10.355267 systemd[1]: Started cri-containerd-568b276388751cc7e8fa1d363780a29cadeff50e89014013d33214d80232a988.scope - libcontainer container 568b276388751cc7e8fa1d363780a29cadeff50e89014013d33214d80232a988. May 9 04:57:10.358465 systemd[1]: Started cri-containerd-bb925a2c0fe99f50b9eeff20027c24423e94cbacb37e1a81c81e96acfbd1363b.scope - libcontainer container bb925a2c0fe99f50b9eeff20027c24423e94cbacb37e1a81c81e96acfbd1363b. May 9 04:57:10.412096 containerd[1496]: time="2025-05-09T04:57:10.408281484Z" level=info msg="StartContainer for \"0baf05878b4ea0df1b7457854f7ea8241e6c7d971b5967c4e382fa685a1a8210\" returns successfully" May 9 04:57:10.418060 kubelet[2351]: W0509 04:57:10.417723 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused May 9 04:57:10.418060 kubelet[2351]: E0509 04:57:10.417799 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused May 9 04:57:10.439678 containerd[1496]: time="2025-05-09T04:57:10.439531507Z" level=info msg="StartContainer for \"568b276388751cc7e8fa1d363780a29cadeff50e89014013d33214d80232a988\" returns successfully" May 9 04:57:10.439678 containerd[1496]: time="2025-05-09T04:57:10.439647882Z" level=info msg="StartContainer for \"bb925a2c0fe99f50b9eeff20027c24423e94cbacb37e1a81c81e96acfbd1363b\" returns successfully" May 9 04:57:10.476561 kubelet[2351]: E0509 04:57:10.473307 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="1.6s" May 9 04:57:10.573896 kubelet[2351]: I0509 04:57:10.573860 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 04:57:10.574201 kubelet[2351]: E0509 04:57:10.574162 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" May 9 04:57:10.599231 kubelet[2351]: W0509 04:57:10.597846 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused May 9 04:57:10.599231 kubelet[2351]: E0509 04:57:10.599236 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused May 9 04:57:12.061762 kubelet[2351]: I0509 04:57:12.061722 2351 apiserver.go:52] "Watching apiserver" May 9 04:57:12.069203 kubelet[2351]: I0509 04:57:12.067992 2351 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 04:57:12.076870 kubelet[2351]: E0509 04:57:12.076843 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 04:57:12.175514 kubelet[2351]: I0509 04:57:12.175489 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 04:57:12.180600 kubelet[2351]: I0509 04:57:12.180574 2351 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 04:57:13.798189 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit 
session-7.scope)... May 9 04:57:13.798212 systemd[1]: Reloading... May 9 04:57:13.869237 zram_generator::config[2664]: No configuration found. May 9 04:57:13.937464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 04:57:14.034996 systemd[1]: Reloading finished in 236 ms. May 9 04:57:14.053343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 04:57:14.053575 kubelet[2351]: E0509 04:57:14.053256 2351 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183dc2fe8eb9be36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 04:57:09.061639734 +0000 UTC m=+0.706987058,LastTimestamp:2025-05-09 04:57:09.061639734 +0000 UTC m=+0.706987058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 04:57:14.053950 kubelet[2351]: I0509 04:57:14.053862 2351 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 04:57:14.062616 systemd[1]: kubelet.service: Deactivated successfully. May 9 04:57:14.062821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 04:57:14.062863 systemd[1]: kubelet.service: Consumed 1.003s CPU time, 114.2M memory peak. May 9 04:57:14.064996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 04:57:14.195125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 04:57:14.207551 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 04:57:14.250925 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 04:57:14.250925 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 04:57:14.250925 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 04:57:14.250925 kubelet[2706]: I0509 04:57:14.248930 2706 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 04:57:14.254583 kubelet[2706]: I0509 04:57:14.254538 2706 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 04:57:14.254583 kubelet[2706]: I0509 04:57:14.254565 2706 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 04:57:14.254749 kubelet[2706]: I0509 04:57:14.254728 2706 server.go:927] "Client rotation is on, will bootstrap in background" May 9 04:57:14.256025 kubelet[2706]: I0509 04:57:14.256008 2706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 04:57:14.257106 kubelet[2706]: I0509 04:57:14.257074 2706 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 04:57:14.266774 kubelet[2706]: I0509 04:57:14.266751 2706 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 04:57:14.266932 kubelet[2706]: I0509 04:57:14.266910 2706 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 04:57:14.267080 kubelet[2706]: I0509 04:57:14.266934 2706 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 04:57:14.267158 kubelet[2706]: I0509 04:57:14.267087 2706 topology_manager.go:138] "Creating topology manager with none policy" May 9 04:57:14.267158 
kubelet[2706]: I0509 04:57:14.267095 2706 container_manager_linux.go:301] "Creating device plugin manager" May 9 04:57:14.267158 kubelet[2706]: I0509 04:57:14.267125 2706 state_mem.go:36] "Initialized new in-memory state store" May 9 04:57:14.267346 kubelet[2706]: I0509 04:57:14.267331 2706 kubelet.go:400] "Attempting to sync node with API server" May 9 04:57:14.267386 kubelet[2706]: I0509 04:57:14.267350 2706 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 04:57:14.267386 kubelet[2706]: I0509 04:57:14.267379 2706 kubelet.go:312] "Adding apiserver pod source" May 9 04:57:14.267436 kubelet[2706]: I0509 04:57:14.267392 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 04:57:14.269429 kubelet[2706]: I0509 04:57:14.268034 2706 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 9 04:57:14.269429 kubelet[2706]: I0509 04:57:14.268220 2706 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 04:57:14.269429 kubelet[2706]: I0509 04:57:14.268555 2706 server.go:1264] "Started kubelet" May 9 04:57:14.269881 kubelet[2706]: I0509 04:57:14.269845 2706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 04:57:14.271149 kubelet[2706]: I0509 04:57:14.271129 2706 server.go:455] "Adding debug handlers to kubelet server" May 9 04:57:14.272340 kubelet[2706]: I0509 04:57:14.272297 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 04:57:14.272572 kubelet[2706]: I0509 04:57:14.272555 2706 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 04:57:14.273478 kubelet[2706]: I0509 04:57:14.273343 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 04:57:14.275087 kubelet[2706]: I0509 04:57:14.275068 2706 volume_manager.go:291] "Starting 
Kubelet Volume Manager" May 9 04:57:14.275359 kubelet[2706]: I0509 04:57:14.275343 2706 reconciler.go:26] "Reconciler: start to sync state" May 9 04:57:14.275434 kubelet[2706]: I0509 04:57:14.275424 2706 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 04:57:14.275937 kubelet[2706]: E0509 04:57:14.275914 2706 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 04:57:14.293847 kubelet[2706]: I0509 04:57:14.293810 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 04:57:14.294908 kubelet[2706]: I0509 04:57:14.294885 2706 factory.go:221] Registration of the systemd container factory successfully May 9 04:57:14.294955 kubelet[2706]: I0509 04:57:14.294918 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 04:57:14.294955 kubelet[2706]: I0509 04:57:14.294944 2706 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 04:57:14.294995 kubelet[2706]: I0509 04:57:14.294959 2706 kubelet.go:2337] "Starting kubelet main sync loop" May 9 04:57:14.294995 kubelet[2706]: I0509 04:57:14.294959 2706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 04:57:14.295036 kubelet[2706]: E0509 04:57:14.295000 2706 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 04:57:14.297301 kubelet[2706]: I0509 04:57:14.297277 2706 factory.go:221] Registration of the containerd container factory successfully May 9 04:57:14.324503 kubelet[2706]: I0509 04:57:14.324419 2706 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 04:57:14.324503 kubelet[2706]: I0509 04:57:14.324440 2706 cpu_manager.go:215] 
"Reconciling" reconcilePeriod="10s" May 9 04:57:14.324503 kubelet[2706]: I0509 04:57:14.324460 2706 state_mem.go:36] "Initialized new in-memory state store" May 9 04:57:14.324649 kubelet[2706]: I0509 04:57:14.324581 2706 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 04:57:14.324649 kubelet[2706]: I0509 04:57:14.324591 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 04:57:14.324649 kubelet[2706]: I0509 04:57:14.324607 2706 policy_none.go:49] "None policy: Start" May 9 04:57:14.325405 kubelet[2706]: I0509 04:57:14.325371 2706 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 04:57:14.325405 kubelet[2706]: I0509 04:57:14.325397 2706 state_mem.go:35] "Initializing new in-memory state store" May 9 04:57:14.325546 kubelet[2706]: I0509 04:57:14.325529 2706 state_mem.go:75] "Updated machine memory state" May 9 04:57:14.331168 kubelet[2706]: I0509 04:57:14.331122 2706 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 04:57:14.331460 kubelet[2706]: I0509 04:57:14.331319 2706 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 04:57:14.331460 kubelet[2706]: I0509 04:57:14.331422 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 04:57:14.375349 kubelet[2706]: I0509 04:57:14.375319 2706 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 04:57:14.382018 kubelet[2706]: I0509 04:57:14.381344 2706 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 9 04:57:14.382018 kubelet[2706]: I0509 04:57:14.381410 2706 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 04:57:14.395365 kubelet[2706]: I0509 04:57:14.395330 2706 topology_manager.go:215] "Topology Admit Handler" podUID="c2ccfc2ee4d73dd967943644e1787751" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 04:57:14.395449 
kubelet[2706]: I0509 04:57:14.395426 2706 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 04:57:14.395475 kubelet[2706]: I0509 04:57:14.395465 2706 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 04:57:14.401162 kubelet[2706]: E0509 04:57:14.400467 2706 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 9 04:57:14.577314 kubelet[2706]: I0509 04:57:14.577190 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2ccfc2ee4d73dd967943644e1787751-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2ccfc2ee4d73dd967943644e1787751\") " pod="kube-system/kube-apiserver-localhost" May 9 04:57:14.577314 kubelet[2706]: I0509 04:57:14.577244 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:14.577314 kubelet[2706]: I0509 04:57:14.577263 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:14.577314 kubelet[2706]: I0509 04:57:14.577280 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:14.577314 kubelet[2706]: I0509 04:57:14.577298 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 04:57:14.577473 kubelet[2706]: I0509 04:57:14.577311 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2ccfc2ee4d73dd967943644e1787751-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2ccfc2ee4d73dd967943644e1787751\") " pod="kube-system/kube-apiserver-localhost" May 9 04:57:14.577473 kubelet[2706]: I0509 04:57:14.577326 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2ccfc2ee4d73dd967943644e1787751-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2ccfc2ee4d73dd967943644e1787751\") " pod="kube-system/kube-apiserver-localhost" May 9 04:57:14.577473 kubelet[2706]: I0509 04:57:14.577342 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:14.577473 kubelet[2706]: I0509 04:57:14.577357 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:57:15.268358 kubelet[2706]: I0509 04:57:15.268316 2706 apiserver.go:52] "Watching apiserver" May 9 04:57:15.275779 kubelet[2706]: I0509 04:57:15.275736 2706 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 04:57:15.315642 kubelet[2706]: E0509 04:57:15.315261 2706 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 9 04:57:15.328212 kubelet[2706]: I0509 04:57:15.328075 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.327432326 podStartE2EDuration="1.327432326s" podCreationTimestamp="2025-05-09 04:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:57:15.327152999 +0000 UTC m=+1.115768906" watchObservedRunningTime="2025-05-09 04:57:15.327432326 +0000 UTC m=+1.116048193" May 9 04:57:15.342748 kubelet[2706]: I0509 04:57:15.342690 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.342676674 podStartE2EDuration="2.342676674s" podCreationTimestamp="2025-05-09 04:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:57:15.334558544 +0000 UTC m=+1.123174411" watchObservedRunningTime="2025-05-09 04:57:15.342676674 +0000 UTC m=+1.131292501" May 9 04:57:15.349516 kubelet[2706]: I0509 04:57:15.349457 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.349443717 podStartE2EDuration="1.349443717s" podCreationTimestamp="2025-05-09 04:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:57:15.34312078 +0000 UTC m=+1.131736647" watchObservedRunningTime="2025-05-09 04:57:15.349443717 +0000 UTC m=+1.138059585" May 9 04:57:19.202684 sudo[1686]: pam_unix(sudo:session): session closed for user root May 9 04:57:19.204303 sshd[1685]: Connection closed by 10.0.0.1 port 39568 May 9 04:57:19.206647 sshd-session[1682]: pam_unix(sshd:session): session closed for user core May 9 04:57:19.209500 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:39568.service: Deactivated successfully. May 9 04:57:19.212776 systemd[1]: session-7.scope: Deactivated successfully. May 9 04:57:19.212944 systemd[1]: session-7.scope: Consumed 6.438s CPU time, 239.1M memory peak. May 9 04:57:19.214850 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. May 9 04:57:19.216043 systemd-logind[1470]: Removed session 7. May 9 04:57:27.401777 kubelet[2706]: I0509 04:57:27.401735 2706 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 04:57:27.409008 containerd[1496]: time="2025-05-09T04:57:27.407360664Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 9 04:57:27.410043 kubelet[2706]: I0509 04:57:27.409495 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 04:57:28.381180 kubelet[2706]: I0509 04:57:28.381123 2706 topology_manager.go:215] "Topology Admit Handler" podUID="dc7173da-ec4f-40da-8f44-33ed5aee0bfa" podNamespace="kube-system" podName="kube-proxy-wb9ht" May 9 04:57:28.395317 systemd[1]: Created slice kubepods-besteffort-poddc7173da_ec4f_40da_8f44_33ed5aee0bfa.slice - libcontainer container kubepods-besteffort-poddc7173da_ec4f_40da_8f44_33ed5aee0bfa.slice. May 9 04:57:28.481260 kubelet[2706]: I0509 04:57:28.481217 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc7173da-ec4f-40da-8f44-33ed5aee0bfa-kube-proxy\") pod \"kube-proxy-wb9ht\" (UID: \"dc7173da-ec4f-40da-8f44-33ed5aee0bfa\") " pod="kube-system/kube-proxy-wb9ht" May 9 04:57:28.481260 kubelet[2706]: I0509 04:57:28.481264 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pb2\" (UniqueName: \"kubernetes.io/projected/dc7173da-ec4f-40da-8f44-33ed5aee0bfa-kube-api-access-d4pb2\") pod \"kube-proxy-wb9ht\" (UID: \"dc7173da-ec4f-40da-8f44-33ed5aee0bfa\") " pod="kube-system/kube-proxy-wb9ht" May 9 04:57:28.481736 kubelet[2706]: I0509 04:57:28.481297 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc7173da-ec4f-40da-8f44-33ed5aee0bfa-xtables-lock\") pod \"kube-proxy-wb9ht\" (UID: \"dc7173da-ec4f-40da-8f44-33ed5aee0bfa\") " pod="kube-system/kube-proxy-wb9ht" May 9 04:57:28.481736 kubelet[2706]: I0509 04:57:28.481315 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc7173da-ec4f-40da-8f44-33ed5aee0bfa-lib-modules\") pod 
\"kube-proxy-wb9ht\" (UID: \"dc7173da-ec4f-40da-8f44-33ed5aee0bfa\") " pod="kube-system/kube-proxy-wb9ht" May 9 04:57:28.541518 kubelet[2706]: I0509 04:57:28.541062 2706 topology_manager.go:215] "Topology Admit Handler" podUID="c65857dc-b0b9-4309-b6ed-90e80b47d7ab" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-qfsh8" May 9 04:57:28.547999 systemd[1]: Created slice kubepods-besteffort-podc65857dc_b0b9_4309_b6ed_90e80b47d7ab.slice - libcontainer container kubepods-besteffort-podc65857dc_b0b9_4309_b6ed_90e80b47d7ab.slice. May 9 04:57:28.582156 kubelet[2706]: I0509 04:57:28.582093 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c65857dc-b0b9-4309-b6ed-90e80b47d7ab-var-lib-calico\") pod \"tigera-operator-797db67f8-qfsh8\" (UID: \"c65857dc-b0b9-4309-b6ed-90e80b47d7ab\") " pod="tigera-operator/tigera-operator-797db67f8-qfsh8" May 9 04:57:28.582639 kubelet[2706]: I0509 04:57:28.582553 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5wtn\" (UniqueName: \"kubernetes.io/projected/c65857dc-b0b9-4309-b6ed-90e80b47d7ab-kube-api-access-v5wtn\") pod \"tigera-operator-797db67f8-qfsh8\" (UID: \"c65857dc-b0b9-4309-b6ed-90e80b47d7ab\") " pod="tigera-operator/tigera-operator-797db67f8-qfsh8" May 9 04:57:28.712268 containerd[1496]: time="2025-05-09T04:57:28.712222868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wb9ht,Uid:dc7173da-ec4f-40da-8f44-33ed5aee0bfa,Namespace:kube-system,Attempt:0,}" May 9 04:57:28.748709 containerd[1496]: time="2025-05-09T04:57:28.748663268Z" level=info msg="connecting to shim a9766ebbc8b3aa24589754377e51c58b8d2c954d8ba88741afc08fefd90164b5" address="unix:///run/containerd/s/9671c1e461d57c99b28180da7f11f3f42911aa51051140cce2c80231eaef6844" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:28.792350 systemd[1]: Started 
cri-containerd-a9766ebbc8b3aa24589754377e51c58b8d2c954d8ba88741afc08fefd90164b5.scope - libcontainer container a9766ebbc8b3aa24589754377e51c58b8d2c954d8ba88741afc08fefd90164b5. May 9 04:57:28.822486 containerd[1496]: time="2025-05-09T04:57:28.822438810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wb9ht,Uid:dc7173da-ec4f-40da-8f44-33ed5aee0bfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9766ebbc8b3aa24589754377e51c58b8d2c954d8ba88741afc08fefd90164b5\"" May 9 04:57:28.836598 containerd[1496]: time="2025-05-09T04:57:28.836152452Z" level=info msg="CreateContainer within sandbox \"a9766ebbc8b3aa24589754377e51c58b8d2c954d8ba88741afc08fefd90164b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 04:57:28.847235 containerd[1496]: time="2025-05-09T04:57:28.846767486Z" level=info msg="Container 8bb3f6a94fc4d30a0a18fbe5e916ca50b81f695cb09460397273911c122f42fe: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:28.852256 containerd[1496]: time="2025-05-09T04:57:28.852218798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-qfsh8,Uid:c65857dc-b0b9-4309-b6ed-90e80b47d7ab,Namespace:tigera-operator,Attempt:0,}" May 9 04:57:28.855620 containerd[1496]: time="2025-05-09T04:57:28.855382583Z" level=info msg="CreateContainer within sandbox \"a9766ebbc8b3aa24589754377e51c58b8d2c954d8ba88741afc08fefd90164b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8bb3f6a94fc4d30a0a18fbe5e916ca50b81f695cb09460397273911c122f42fe\"" May 9 04:57:28.856183 containerd[1496]: time="2025-05-09T04:57:28.856053829Z" level=info msg="StartContainer for \"8bb3f6a94fc4d30a0a18fbe5e916ca50b81f695cb09460397273911c122f42fe\"" May 9 04:57:28.857494 containerd[1496]: time="2025-05-09T04:57:28.857447335Z" level=info msg="connecting to shim 8bb3f6a94fc4d30a0a18fbe5e916ca50b81f695cb09460397273911c122f42fe" 
address="unix:///run/containerd/s/9671c1e461d57c99b28180da7f11f3f42911aa51051140cce2c80231eaef6844" protocol=ttrpc version=3 May 9 04:57:28.876300 containerd[1496]: time="2025-05-09T04:57:28.875812571Z" level=info msg="connecting to shim cbec78b157f51f00d0db95bc591207a8c418a1d373f54991c69b12a83f87819d" address="unix:///run/containerd/s/f8c499be8c46e526af66cffff76131819fa65c21e6466224bc632eb4b00f9a39" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:28.880365 systemd[1]: Started cri-containerd-8bb3f6a94fc4d30a0a18fbe5e916ca50b81f695cb09460397273911c122f42fe.scope - libcontainer container 8bb3f6a94fc4d30a0a18fbe5e916ca50b81f695cb09460397273911c122f42fe. May 9 04:57:28.907428 systemd[1]: Started cri-containerd-cbec78b157f51f00d0db95bc591207a8c418a1d373f54991c69b12a83f87819d.scope - libcontainer container cbec78b157f51f00d0db95bc591207a8c418a1d373f54991c69b12a83f87819d. May 9 04:57:28.946649 containerd[1496]: time="2025-05-09T04:57:28.946593090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-qfsh8,Uid:c65857dc-b0b9-4309-b6ed-90e80b47d7ab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cbec78b157f51f00d0db95bc591207a8c418a1d373f54991c69b12a83f87819d\"" May 9 04:57:28.959253 containerd[1496]: time="2025-05-09T04:57:28.959208339Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 9 04:57:28.980609 containerd[1496]: time="2025-05-09T04:57:28.980523227Z" level=info msg="StartContainer for \"8bb3f6a94fc4d30a0a18fbe5e916ca50b81f695cb09460397273911c122f42fe\" returns successfully" May 9 04:57:29.602781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37501393.mount: Deactivated successfully. May 9 04:57:30.312397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3150491131.mount: Deactivated successfully. May 9 04:57:30.478562 update_engine[1474]: I20250509 04:57:30.478493 1474 update_attempter.cc:509] Updating boot flags... 
May 9 04:57:30.504439 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2997) May 9 04:57:30.579284 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2996) May 9 04:57:30.766120 containerd[1496]: time="2025-05-09T04:57:30.766065067Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:30.766826 containerd[1496]: time="2025-05-09T04:57:30.766761663Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 9 04:57:30.767482 containerd[1496]: time="2025-05-09T04:57:30.767456779Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:30.769435 containerd[1496]: time="2025-05-09T04:57:30.769233096Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:30.770685 containerd[1496]: time="2025-05-09T04:57:30.770658255Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.811407026s" May 9 04:57:30.770766 containerd[1496]: time="2025-05-09T04:57:30.770688782Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 9 04:57:30.773908 containerd[1496]: time="2025-05-09T04:57:30.773861612Z" level=info msg="CreateContainer within sandbox 
\"cbec78b157f51f00d0db95bc591207a8c418a1d373f54991c69b12a83f87819d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 9 04:57:30.781863 containerd[1496]: time="2025-05-09T04:57:30.781047141Z" level=info msg="Container ce4f1856b1678c1beed69a649f426ea2044561b623a1b77e10594430b9639ea4: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:30.787858 containerd[1496]: time="2025-05-09T04:57:30.787806413Z" level=info msg="CreateContainer within sandbox \"cbec78b157f51f00d0db95bc591207a8c418a1d373f54991c69b12a83f87819d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ce4f1856b1678c1beed69a649f426ea2044561b623a1b77e10594430b9639ea4\"" May 9 04:57:30.788340 containerd[1496]: time="2025-05-09T04:57:30.788311407Z" level=info msg="StartContainer for \"ce4f1856b1678c1beed69a649f426ea2044561b623a1b77e10594430b9639ea4\"" May 9 04:57:30.789077 containerd[1496]: time="2025-05-09T04:57:30.789053893Z" level=info msg="connecting to shim ce4f1856b1678c1beed69a649f426ea2044561b623a1b77e10594430b9639ea4" address="unix:///run/containerd/s/f8c499be8c46e526af66cffff76131819fa65c21e6466224bc632eb4b00f9a39" protocol=ttrpc version=3 May 9 04:57:30.810395 systemd[1]: Started cri-containerd-ce4f1856b1678c1beed69a649f426ea2044561b623a1b77e10594430b9639ea4.scope - libcontainer container ce4f1856b1678c1beed69a649f426ea2044561b623a1b77e10594430b9639ea4. 
May 9 04:57:30.835688 containerd[1496]: time="2025-05-09T04:57:30.835590949Z" level=info msg="StartContainer for \"ce4f1856b1678c1beed69a649f426ea2044561b623a1b77e10594430b9639ea4\" returns successfully" May 9 04:57:31.354923 kubelet[2706]: I0509 04:57:31.354862 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wb9ht" podStartSLOduration=3.354842926 podStartE2EDuration="3.354842926s" podCreationTimestamp="2025-05-09 04:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:57:29.357600549 +0000 UTC m=+15.146216416" watchObservedRunningTime="2025-05-09 04:57:31.354842926 +0000 UTC m=+17.143458753" May 9 04:57:31.355464 kubelet[2706]: I0509 04:57:31.354963 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-qfsh8" podStartSLOduration=1.538681248 podStartE2EDuration="3.354959231s" podCreationTimestamp="2025-05-09 04:57:28 +0000 UTC" firstStartedPulling="2025-05-09 04:57:28.956490985 +0000 UTC m=+14.745106852" lastFinishedPulling="2025-05-09 04:57:30.772768968 +0000 UTC m=+16.561384835" observedRunningTime="2025-05-09 04:57:31.354700896 +0000 UTC m=+17.143316763" watchObservedRunningTime="2025-05-09 04:57:31.354959231 +0000 UTC m=+17.143575098" May 9 04:57:34.416696 kubelet[2706]: I0509 04:57:34.416633 2706 topology_manager.go:215] "Topology Admit Handler" podUID="3b8340e1-efac-417c-94a3-b5f454d5ad70" podNamespace="calico-system" podName="calico-typha-6d946ffd47-4ltjg" May 9 04:57:34.431316 systemd[1]: Created slice kubepods-besteffort-pod3b8340e1_efac_417c_94a3_b5f454d5ad70.slice - libcontainer container kubepods-besteffort-pod3b8340e1_efac_417c_94a3_b5f454d5ad70.slice. 
May 9 04:57:34.464435 kubelet[2706]: I0509 04:57:34.464324 2706 topology_manager.go:215] "Topology Admit Handler" podUID="b71bb900-42a6-447a-b661-b3f8deb4d470" podNamespace="calico-system" podName="calico-node-g6dpj" May 9 04:57:34.477105 systemd[1]: Created slice kubepods-besteffort-podb71bb900_42a6_447a_b661_b3f8deb4d470.slice - libcontainer container kubepods-besteffort-podb71bb900_42a6_447a_b661_b3f8deb4d470.slice. May 9 04:57:34.529347 kubelet[2706]: I0509 04:57:34.529283 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b71bb900-42a6-447a-b661-b3f8deb4d470-tigera-ca-bundle\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529573 kubelet[2706]: I0509 04:57:34.529366 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b8340e1-efac-417c-94a3-b5f454d5ad70-tigera-ca-bundle\") pod \"calico-typha-6d946ffd47-4ltjg\" (UID: \"3b8340e1-efac-417c-94a3-b5f454d5ad70\") " pod="calico-system/calico-typha-6d946ffd47-4ltjg" May 9 04:57:34.529573 kubelet[2706]: I0509 04:57:34.529409 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-var-lib-calico\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529573 kubelet[2706]: I0509 04:57:34.529429 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-cni-bin-dir\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 
04:57:34.529573 kubelet[2706]: I0509 04:57:34.529447 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7cn2\" (UniqueName: \"kubernetes.io/projected/3b8340e1-efac-417c-94a3-b5f454d5ad70-kube-api-access-j7cn2\") pod \"calico-typha-6d946ffd47-4ltjg\" (UID: \"3b8340e1-efac-417c-94a3-b5f454d5ad70\") " pod="calico-system/calico-typha-6d946ffd47-4ltjg" May 9 04:57:34.529573 kubelet[2706]: I0509 04:57:34.529485 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-flexvol-driver-host\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529701 kubelet[2706]: I0509 04:57:34.529551 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-cni-log-dir\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529701 kubelet[2706]: I0509 04:57:34.529589 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b71bb900-42a6-447a-b661-b3f8deb4d470-node-certs\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529701 kubelet[2706]: I0509 04:57:34.529608 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-lib-modules\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529701 kubelet[2706]: I0509 
04:57:34.529626 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-var-run-calico\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529701 kubelet[2706]: I0509 04:57:34.529642 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9sjv\" (UniqueName: \"kubernetes.io/projected/b71bb900-42a6-447a-b661-b3f8deb4d470-kube-api-access-n9sjv\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529811 kubelet[2706]: I0509 04:57:34.529662 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-xtables-lock\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529811 kubelet[2706]: I0509 04:57:34.529678 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-policysync\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529811 kubelet[2706]: I0509 04:57:34.529703 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b71bb900-42a6-447a-b661-b3f8deb4d470-cni-net-dir\") pod \"calico-node-g6dpj\" (UID: \"b71bb900-42a6-447a-b661-b3f8deb4d470\") " pod="calico-system/calico-node-g6dpj" May 9 04:57:34.529811 kubelet[2706]: I0509 04:57:34.529726 2706 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3b8340e1-efac-417c-94a3-b5f454d5ad70-typha-certs\") pod \"calico-typha-6d946ffd47-4ltjg\" (UID: \"3b8340e1-efac-417c-94a3-b5f454d5ad70\") " pod="calico-system/calico-typha-6d946ffd47-4ltjg" May 9 04:57:34.582872 kubelet[2706]: I0509 04:57:34.582830 2706 topology_manager.go:215] "Topology Admit Handler" podUID="7d231f9b-cbef-416a-93ac-f825fa0ec566" podNamespace="calico-system" podName="csi-node-driver-bbccr" May 9 04:57:34.583129 kubelet[2706]: E0509 04:57:34.583106 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbccr" podUID="7d231f9b-cbef-416a-93ac-f825fa0ec566" May 9 04:57:34.630513 kubelet[2706]: I0509 04:57:34.630453 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7d231f9b-cbef-416a-93ac-f825fa0ec566-varrun\") pod \"csi-node-driver-bbccr\" (UID: \"7d231f9b-cbef-416a-93ac-f825fa0ec566\") " pod="calico-system/csi-node-driver-bbccr" May 9 04:57:34.630513 kubelet[2706]: I0509 04:57:34.630523 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d231f9b-cbef-416a-93ac-f825fa0ec566-kubelet-dir\") pod \"csi-node-driver-bbccr\" (UID: \"7d231f9b-cbef-416a-93ac-f825fa0ec566\") " pod="calico-system/csi-node-driver-bbccr" May 9 04:57:34.630675 kubelet[2706]: I0509 04:57:34.630580 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7d231f9b-cbef-416a-93ac-f825fa0ec566-registration-dir\") pod \"csi-node-driver-bbccr\" (UID: 
\"7d231f9b-cbef-416a-93ac-f825fa0ec566\") " pod="calico-system/csi-node-driver-bbccr" May 9 04:57:34.630675 kubelet[2706]: I0509 04:57:34.630609 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7d231f9b-cbef-416a-93ac-f825fa0ec566-socket-dir\") pod \"csi-node-driver-bbccr\" (UID: \"7d231f9b-cbef-416a-93ac-f825fa0ec566\") " pod="calico-system/csi-node-driver-bbccr" May 9 04:57:34.630675 kubelet[2706]: I0509 04:57:34.630626 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq5l4\" (UniqueName: \"kubernetes.io/projected/7d231f9b-cbef-416a-93ac-f825fa0ec566-kube-api-access-rq5l4\") pod \"csi-node-driver-bbccr\" (UID: \"7d231f9b-cbef-416a-93ac-f825fa0ec566\") " pod="calico-system/csi-node-driver-bbccr" May 9 04:57:34.634442 kubelet[2706]: E0509 04:57:34.634419 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.634598 kubelet[2706]: W0509 04:57:34.634534 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.634598 kubelet[2706]: E0509 04:57:34.634558 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.638631 kubelet[2706]: E0509 04:57:34.638565 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.638631 kubelet[2706]: W0509 04:57:34.638579 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.638631 kubelet[2706]: E0509 04:57:34.638592 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.640697 kubelet[2706]: E0509 04:57:34.640676 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.640697 kubelet[2706]: W0509 04:57:34.640692 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.640784 kubelet[2706]: E0509 04:57:34.640712 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.641028 kubelet[2706]: E0509 04:57:34.640867 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.641028 kubelet[2706]: W0509 04:57:34.640884 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.641028 kubelet[2706]: E0509 04:57:34.640900 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.641115 kubelet[2706]: E0509 04:57:34.641045 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.641115 kubelet[2706]: W0509 04:57:34.641058 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.641115 kubelet[2706]: E0509 04:57:34.641071 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.731755 kubelet[2706]: E0509 04:57:34.731557 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.731755 kubelet[2706]: W0509 04:57:34.731575 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.731755 kubelet[2706]: E0509 04:57:34.731588 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.731927 kubelet[2706]: E0509 04:57:34.731849 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.731927 kubelet[2706]: W0509 04:57:34.731857 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.731927 kubelet[2706]: E0509 04:57:34.731867 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.732796 kubelet[2706]: E0509 04:57:34.732042 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.732796 kubelet[2706]: W0509 04:57:34.732058 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.732796 kubelet[2706]: E0509 04:57:34.732068 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.732796 kubelet[2706]: E0509 04:57:34.732248 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.732796 kubelet[2706]: W0509 04:57:34.732256 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.732796 kubelet[2706]: E0509 04:57:34.732265 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.732796 kubelet[2706]: E0509 04:57:34.732476 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.732796 kubelet[2706]: W0509 04:57:34.732496 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.732796 kubelet[2706]: E0509 04:57:34.732614 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.733030 kubelet[2706]: E0509 04:57:34.732846 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.733030 kubelet[2706]: W0509 04:57:34.732857 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.733030 kubelet[2706]: E0509 04:57:34.732879 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.733189 kubelet[2706]: E0509 04:57:34.733173 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.733244 kubelet[2706]: W0509 04:57:34.733189 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.733372 kubelet[2706]: E0509 04:57:34.733294 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.733513 kubelet[2706]: E0509 04:57:34.733466 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.733513 kubelet[2706]: W0509 04:57:34.733483 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.733576 kubelet[2706]: E0509 04:57:34.733514 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.733720 kubelet[2706]: E0509 04:57:34.733675 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.733788 kubelet[2706]: W0509 04:57:34.733770 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.733788 kubelet[2706]: E0509 04:57:34.733812 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.733950 kubelet[2706]: E0509 04:57:34.733935 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.733950 kubelet[2706]: W0509 04:57:34.733946 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.734003 kubelet[2706]: E0509 04:57:34.733987 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.734157 kubelet[2706]: E0509 04:57:34.734130 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.734157 kubelet[2706]: W0509 04:57:34.734141 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.734261 kubelet[2706]: E0509 04:57:34.734183 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.734284 kubelet[2706]: E0509 04:57:34.734278 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.734329 kubelet[2706]: W0509 04:57:34.734287 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.734329 kubelet[2706]: E0509 04:57:34.734302 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.734456 kubelet[2706]: E0509 04:57:34.734444 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.734456 kubelet[2706]: W0509 04:57:34.734454 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.734506 kubelet[2706]: E0509 04:57:34.734467 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.734650 kubelet[2706]: E0509 04:57:34.734638 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.734650 kubelet[2706]: W0509 04:57:34.734649 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.734704 kubelet[2706]: E0509 04:57:34.734662 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.734875 kubelet[2706]: E0509 04:57:34.734860 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.734875 kubelet[2706]: W0509 04:57:34.734871 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.734930 kubelet[2706]: E0509 04:57:34.734884 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.735044 kubelet[2706]: E0509 04:57:34.735032 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.735044 kubelet[2706]: W0509 04:57:34.735043 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.735102 kubelet[2706]: E0509 04:57:34.735055 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.735230 kubelet[2706]: E0509 04:57:34.735184 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.735230 kubelet[2706]: W0509 04:57:34.735208 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.735295 kubelet[2706]: E0509 04:57:34.735252 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.735362 kubelet[2706]: E0509 04:57:34.735350 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.735362 kubelet[2706]: W0509 04:57:34.735360 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.735428 kubelet[2706]: E0509 04:57:34.735407 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.735520 kubelet[2706]: E0509 04:57:34.735509 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.735520 kubelet[2706]: W0509 04:57:34.735518 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.735571 kubelet[2706]: E0509 04:57:34.735556 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.735662 kubelet[2706]: E0509 04:57:34.735651 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.735662 kubelet[2706]: W0509 04:57:34.735660 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.735716 kubelet[2706]: E0509 04:57:34.735672 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.735868 kubelet[2706]: E0509 04:57:34.735855 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.735868 kubelet[2706]: W0509 04:57:34.735866 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.735927 kubelet[2706]: E0509 04:57:34.735878 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.736036 kubelet[2706]: E0509 04:57:34.736025 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.736036 kubelet[2706]: W0509 04:57:34.736035 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.736086 kubelet[2706]: E0509 04:57:34.736048 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.736262 kubelet[2706]: E0509 04:57:34.736250 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.736290 kubelet[2706]: W0509 04:57:34.736263 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.736290 kubelet[2706]: E0509 04:57:34.736281 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.736470 kubelet[2706]: E0509 04:57:34.736457 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.736470 kubelet[2706]: W0509 04:57:34.736468 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.736531 kubelet[2706]: E0509 04:57:34.736477 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.736706 kubelet[2706]: E0509 04:57:34.736694 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.736706 kubelet[2706]: W0509 04:57:34.736706 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.736753 kubelet[2706]: E0509 04:57:34.736715 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 04:57:34.737162 containerd[1496]: time="2025-05-09T04:57:34.737120467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d946ffd47-4ltjg,Uid:3b8340e1-efac-417c-94a3-b5f454d5ad70,Namespace:calico-system,Attempt:0,}" May 9 04:57:34.752633 kubelet[2706]: E0509 04:57:34.751787 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:34.752633 kubelet[2706]: W0509 04:57:34.751816 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:34.752633 kubelet[2706]: E0509 04:57:34.751833 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:34.781617 containerd[1496]: time="2025-05-09T04:57:34.781566714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g6dpj,Uid:b71bb900-42a6-447a-b661-b3f8deb4d470,Namespace:calico-system,Attempt:0,}" May 9 04:57:34.790125 containerd[1496]: time="2025-05-09T04:57:34.790084800Z" level=info msg="connecting to shim 7e2ce89702f8a5a3f3c5468e4f81fa6fe7a08bf7edf898415a71bc46dab0a191" address="unix:///run/containerd/s/acc886e1eca9c7b65c506fed85aa05dae2fb9be9bb50451e591e42367d7d1fc6" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:34.806429 containerd[1496]: time="2025-05-09T04:57:34.806348108Z" level=info msg="connecting to shim 182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3" address="unix:///run/containerd/s/dc2ea9b5db63a8aa65ea3a2f3769189b1d1c36eb12d7caa8b08f46551570df9e" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:34.816370 systemd[1]: Started cri-containerd-7e2ce89702f8a5a3f3c5468e4f81fa6fe7a08bf7edf898415a71bc46dab0a191.scope - libcontainer container 7e2ce89702f8a5a3f3c5468e4f81fa6fe7a08bf7edf898415a71bc46dab0a191. May 9 04:57:34.824136 systemd[1]: Started cri-containerd-182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3.scope - libcontainer container 182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3. 
May 9 04:57:34.852904 containerd[1496]: time="2025-05-09T04:57:34.852865617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d946ffd47-4ltjg,Uid:3b8340e1-efac-417c-94a3-b5f454d5ad70,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e2ce89702f8a5a3f3c5468e4f81fa6fe7a08bf7edf898415a71bc46dab0a191\"" May 9 04:57:34.855218 containerd[1496]: time="2025-05-09T04:57:34.855135874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 9 04:57:34.856722 containerd[1496]: time="2025-05-09T04:57:34.856617906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g6dpj,Uid:b71bb900-42a6-447a-b661-b3f8deb4d470,Namespace:calico-system,Attempt:0,} returns sandbox id \"182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3\"" May 9 04:57:36.296210 kubelet[2706]: E0509 04:57:36.296146 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbccr" podUID="7d231f9b-cbef-416a-93ac-f825fa0ec566" May 9 04:57:36.681678 containerd[1496]: time="2025-05-09T04:57:36.681633184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:36.682499 containerd[1496]: time="2025-05-09T04:57:36.682396312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 9 04:57:36.683605 containerd[1496]: time="2025-05-09T04:57:36.683146517Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:36.685459 containerd[1496]: time="2025-05-09T04:57:36.685432379Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:36.685988 containerd[1496]: time="2025-05-09T04:57:36.685954907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.830788548s" May 9 04:57:36.685988 containerd[1496]: time="2025-05-09T04:57:36.685986392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 9 04:57:36.686957 containerd[1496]: time="2025-05-09T04:57:36.686927870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 04:57:36.695172 containerd[1496]: time="2025-05-09T04:57:36.695097877Z" level=info msg="CreateContainer within sandbox \"7e2ce89702f8a5a3f3c5468e4f81fa6fe7a08bf7edf898415a71bc46dab0a191\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 9 04:57:36.702468 containerd[1496]: time="2025-05-09T04:57:36.702426383Z" level=info msg="Container 99614c2624eefe624b9dbdc98aaef6604b38ef8ad1260e5e129d2f5633639135: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:36.705044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714719584.mount: Deactivated successfully. 
May 9 04:57:36.709647 containerd[1496]: time="2025-05-09T04:57:36.709610464Z" level=info msg="CreateContainer within sandbox \"7e2ce89702f8a5a3f3c5468e4f81fa6fe7a08bf7edf898415a71bc46dab0a191\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"99614c2624eefe624b9dbdc98aaef6604b38ef8ad1260e5e129d2f5633639135\"" May 9 04:57:36.710077 containerd[1496]: time="2025-05-09T04:57:36.710045257Z" level=info msg="StartContainer for \"99614c2624eefe624b9dbdc98aaef6604b38ef8ad1260e5e129d2f5633639135\"" May 9 04:57:36.711309 containerd[1496]: time="2025-05-09T04:57:36.711115036Z" level=info msg="connecting to shim 99614c2624eefe624b9dbdc98aaef6604b38ef8ad1260e5e129d2f5633639135" address="unix:///run/containerd/s/acc886e1eca9c7b65c506fed85aa05dae2fb9be9bb50451e591e42367d7d1fc6" protocol=ttrpc version=3 May 9 04:57:36.731360 systemd[1]: Started cri-containerd-99614c2624eefe624b9dbdc98aaef6604b38ef8ad1260e5e129d2f5633639135.scope - libcontainer container 99614c2624eefe624b9dbdc98aaef6604b38ef8ad1260e5e129d2f5633639135. 
May 9 04:57:36.765872 containerd[1496]: time="2025-05-09T04:57:36.765085625Z" level=info msg="StartContainer for \"99614c2624eefe624b9dbdc98aaef6604b38ef8ad1260e5e129d2f5633639135\" returns successfully" May 9 04:57:37.372871 kubelet[2706]: I0509 04:57:37.372519 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d946ffd47-4ltjg" podStartSLOduration=1.540830928 podStartE2EDuration="3.372502147s" podCreationTimestamp="2025-05-09 04:57:34 +0000 UTC" firstStartedPulling="2025-05-09 04:57:34.854933437 +0000 UTC m=+20.643549304" lastFinishedPulling="2025-05-09 04:57:36.686604656 +0000 UTC m=+22.475220523" observedRunningTime="2025-05-09 04:57:37.372435376 +0000 UTC m=+23.161051243" watchObservedRunningTime="2025-05-09 04:57:37.372502147 +0000 UTC m=+23.161117974" May 9 04:57:37.436171 kubelet[2706]: E0509 04:57:37.436127 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:37.436171 kubelet[2706]: W0509 04:57:37.436152 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:37.436171 kubelet[2706]: E0509 04:57:37.436171 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 04:57:37.436425 kubelet[2706]: E0509 04:57:37.436399 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:37.436425 kubelet[2706]: W0509 04:57:37.436411 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:37.436425 kubelet[2706]: E0509 04:57:37.436421 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical kubelet FlexVolume driver-call failure / plugin-probe error triplets repeated continuously from 04:57:37.436 through 04:57:37.456; duplicate entries elided, last occurrence follows]
May 9 04:57:37.456253 kubelet[2706]: E0509 04:57:37.456240 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 04:57:37.456276 kubelet[2706]: W0509 04:57:37.456254 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 04:57:37.456276 kubelet[2706]: E0509 04:57:37.456263 2706 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
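[editor's note] The repeated "unexpected end of JSON input" errors above come from the kubelet's FlexVolume probe: it execs `<driver> init` and unmarshals the driver's stdout as JSON, but the `nodeagent~uds/uds` executable does not exist yet, so stdout is empty and decoding `""` fails. A minimal sketch of the call contract a FlexVolume driver is expected to satisfy (the `attach: false` capability value is illustrative, not taken from this log):

```python
#!/usr/bin/env python3
# Minimal sketch of the FlexVolume driver call contract: the kubelet runs
# "<driver> init" during plugin probing and parses stdout as a JSON status
# object. An absent driver binary produces empty output, which is exactly
# the "unexpected end of JSON input" seen in the kubelet log above.
import json
import sys

def handle(args):
    # "init" is the call issued while the kubelet probes the plugin dir.
    if args and args[0] == "init":
        return {"status": "Success", "capabilities": {"attach": False}}
    # Other calls a stub does not implement can report "Not supported".
    return {"status": "Not supported", "message": "call not implemented"}

if __name__ == "__main__":
    print(json.dumps(handle(sys.argv[1:])))
```

Once the Calico `flexvol-driver` init container (started a few entries below) installs the real `uds` binary into `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/`, these probe errors stop.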
Error: unexpected end of JSON input" May 9 04:57:37.803465 containerd[1496]: time="2025-05-09T04:57:37.802869212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:37.804003 containerd[1496]: time="2025-05-09T04:57:37.803978429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 9 04:57:37.804623 containerd[1496]: time="2025-05-09T04:57:37.804588567Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:37.806331 containerd[1496]: time="2025-05-09T04:57:37.806298800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:37.807463 containerd[1496]: time="2025-05-09T04:57:37.807252632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.120295958s" May 9 04:57:37.807463 containerd[1496]: time="2025-05-09T04:57:37.807283477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 9 04:57:37.813477 containerd[1496]: time="2025-05-09T04:57:37.813252991Z" level=info msg="CreateContainer within sandbox \"182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 04:57:37.820469 containerd[1496]: time="2025-05-09T04:57:37.820433099Z" level=info msg="Container d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:37.827545 containerd[1496]: time="2025-05-09T04:57:37.827503029Z" level=info msg="CreateContainer within sandbox \"182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246\"" May 9 04:57:37.828181 containerd[1496]: time="2025-05-09T04:57:37.828133370Z" level=info msg="StartContainer for \"d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246\"" May 9 04:57:37.830098 containerd[1496]: time="2025-05-09T04:57:37.830066399Z" level=info msg="connecting to shim d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246" address="unix:///run/containerd/s/dc2ea9b5db63a8aa65ea3a2f3769189b1d1c36eb12d7caa8b08f46551570df9e" protocol=ttrpc version=3 May 9 04:57:37.854423 systemd[1]: Started cri-containerd-d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246.scope - libcontainer container d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246. May 9 04:57:37.890957 containerd[1496]: time="2025-05-09T04:57:37.890906483Z" level=info msg="StartContainer for \"d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246\" returns successfully" May 9 04:57:37.914850 systemd[1]: cri-containerd-d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246.scope: Deactivated successfully. 
May 9 04:57:37.941838 containerd[1496]: time="2025-05-09T04:57:37.941770652Z" level=info msg="received exit event container_id:\"d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246\" id:\"d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246\" pid:3327 exited_at:{seconds:1746766657 nanos:930085345}" May 9 04:57:37.949246 containerd[1496]: time="2025-05-09T04:57:37.948810417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246\" id:\"d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246\" pid:3327 exited_at:{seconds:1746766657 nanos:930085345}" May 9 04:57:37.991629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d34b99a1a080471d551b63941b2f2edc02a5e8bb4f1c4de22c8fcf85d427a246-rootfs.mount: Deactivated successfully. May 9 04:57:38.295716 kubelet[2706]: E0509 04:57:38.295675 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbccr" podUID="7d231f9b-cbef-416a-93ac-f825fa0ec566" May 9 04:57:38.367229 containerd[1496]: time="2025-05-09T04:57:38.366935565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 04:57:38.370397 kubelet[2706]: I0509 04:57:38.370356 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 04:57:40.296316 kubelet[2706]: E0509 04:57:40.296274 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbccr" podUID="7d231f9b-cbef-416a-93ac-f825fa0ec566" May 9 04:57:42.052562 containerd[1496]: time="2025-05-09T04:57:42.052520290Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:42.053307 containerd[1496]: time="2025-05-09T04:57:42.053229742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 9 04:57:42.054133 containerd[1496]: time="2025-05-09T04:57:42.053770531Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:42.055638 containerd[1496]: time="2025-05-09T04:57:42.055611209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:42.056229 containerd[1496]: time="2025-05-09T04:57:42.056181642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.68919431s" May 9 04:57:42.056357 containerd[1496]: time="2025-05-09T04:57:42.056340223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 9 04:57:42.058331 containerd[1496]: time="2025-05-09T04:57:42.058301075Z" level=info msg="CreateContainer within sandbox \"182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 04:57:42.070344 containerd[1496]: time="2025-05-09T04:57:42.069313375Z" level=info msg="Container 14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433: CDI devices from CRI Config.CDIDevices: []" 
May 9 04:57:42.077004 containerd[1496]: time="2025-05-09T04:57:42.076957441Z" level=info msg="CreateContainer within sandbox \"182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433\"" May 9 04:57:42.077538 containerd[1496]: time="2025-05-09T04:57:42.077501751Z" level=info msg="StartContainer for \"14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433\"" May 9 04:57:42.078790 containerd[1496]: time="2025-05-09T04:57:42.078762633Z" level=info msg="connecting to shim 14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433" address="unix:///run/containerd/s/dc2ea9b5db63a8aa65ea3a2f3769189b1d1c36eb12d7caa8b08f46551570df9e" protocol=ttrpc version=3 May 9 04:57:42.104339 systemd[1]: Started cri-containerd-14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433.scope - libcontainer container 14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433. 
May 9 04:57:42.138307 containerd[1496]: time="2025-05-09T04:57:42.138270185Z" level=info msg="StartContainer for \"14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433\" returns successfully" May 9 04:57:42.296214 kubelet[2706]: E0509 04:57:42.296156 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bbccr" podUID="7d231f9b-cbef-416a-93ac-f825fa0ec566" May 9 04:57:42.738829 containerd[1496]: time="2025-05-09T04:57:42.738779724Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 04:57:42.741613 systemd[1]: cri-containerd-14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433.scope: Deactivated successfully. May 9 04:57:42.741924 systemd[1]: cri-containerd-14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433.scope: Consumed 471ms CPU time, 160.7M memory peak, 48K read from disk, 150.3M written to disk. 
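[editor's note] The "no network config found in /etc/cni/net.d: cni plugin not initialized" error above, and the repeated "network is not ready" pod-sync errors, persist until the Calico `install-cni` init container writes a CNI network configuration list into `/etc/cni/net.d`, which containerd watches and reloads. For orientation, a generic minimal CNI conflist of the shape containerd expects (names and values here are illustrative, not the actual file Calico writes on this node):

```json
{
  "cniVersion": "0.3.1",
  "name": "example-pod-network",
  "plugins": [
    {
      "type": "calico",
      "ipam": { "type": "calico-ipam" }
    }
  ]
}
```

The node reports Ready ("Fast updating node status as it just became ready", below) only after a valid config of this shape is present.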
May 9 04:57:42.745734 containerd[1496]: time="2025-05-09T04:57:42.744447534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433\" id:\"14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433\" pid:3386 exited_at:{seconds:1746766662 nanos:743995396}" May 9 04:57:42.745734 containerd[1496]: time="2025-05-09T04:57:42.745584441Z" level=info msg="received exit event container_id:\"14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433\" id:\"14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433\" pid:3386 exited_at:{seconds:1746766662 nanos:743995396}" May 9 04:57:42.762554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d5d933d78b8be75d94f3c744230246055389f6c958b1dd00e1168a7c3ce433-rootfs.mount: Deactivated successfully. May 9 04:57:42.796005 kubelet[2706]: I0509 04:57:42.795976 2706 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 04:57:42.816461 kubelet[2706]: I0509 04:57:42.816163 2706 topology_manager.go:215] "Topology Admit Handler" podUID="fe86203c-44de-468e-9a1f-11db50f9ec22" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nzgn4" May 9 04:57:42.817265 kubelet[2706]: I0509 04:57:42.817063 2706 topology_manager.go:215] "Topology Admit Handler" podUID="c11b3bed-2237-41a6-a4b0-e6f731c98df3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kkznp" May 9 04:57:42.817969 kubelet[2706]: I0509 04:57:42.817946 2706 topology_manager.go:215] "Topology Admit Handler" podUID="e14efcfa-b02b-4469-8e9c-9cad29d3a7b6" podNamespace="calico-apiserver" podName="calico-apiserver-85b8bfbd84-rt7bb" May 9 04:57:42.820391 kubelet[2706]: I0509 04:57:42.820352 2706 topology_manager.go:215] "Topology Admit Handler" podUID="8dbd8cf3-ef09-4192-885a-6d0344b32f46" podNamespace="calico-system" podName="calico-kube-controllers-7959888855-znzll" May 9 04:57:42.822567 kubelet[2706]: I0509 04:57:42.822524 2706 
topology_manager.go:215] "Topology Admit Handler" podUID="14c5fd8f-4443-4b4f-a1a7-03b39d4ec063" podNamespace="calico-apiserver" podName="calico-apiserver-85b8bfbd84-j6xc6" May 9 04:57:42.830396 systemd[1]: Created slice kubepods-burstable-podfe86203c_44de_468e_9a1f_11db50f9ec22.slice - libcontainer container kubepods-burstable-podfe86203c_44de_468e_9a1f_11db50f9ec22.slice. May 9 04:57:42.840023 systemd[1]: Created slice kubepods-burstable-podc11b3bed_2237_41a6_a4b0_e6f731c98df3.slice - libcontainer container kubepods-burstable-podc11b3bed_2237_41a6_a4b0_e6f731c98df3.slice. May 9 04:57:42.847418 systemd[1]: Created slice kubepods-besteffort-pode14efcfa_b02b_4469_8e9c_9cad29d3a7b6.slice - libcontainer container kubepods-besteffort-pode14efcfa_b02b_4469_8e9c_9cad29d3a7b6.slice. May 9 04:57:42.856185 systemd[1]: Created slice kubepods-besteffort-pod8dbd8cf3_ef09_4192_885a_6d0344b32f46.slice - libcontainer container kubepods-besteffort-pod8dbd8cf3_ef09_4192_885a_6d0344b32f46.slice. May 9 04:57:42.864993 systemd[1]: Created slice kubepods-besteffort-pod14c5fd8f_4443_4b4f_a1a7_03b39d4ec063.slice - libcontainer container kubepods-besteffort-pod14c5fd8f_4443_4b4f_a1a7_03b39d4ec063.slice. 
May 9 04:57:42.893462 kubelet[2706]: I0509 04:57:42.893411 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxcbv\" (UniqueName: \"kubernetes.io/projected/c11b3bed-2237-41a6-a4b0-e6f731c98df3-kube-api-access-lxcbv\") pod \"coredns-7db6d8ff4d-kkznp\" (UID: \"c11b3bed-2237-41a6-a4b0-e6f731c98df3\") " pod="kube-system/coredns-7db6d8ff4d-kkznp" May 9 04:57:42.893462 kubelet[2706]: I0509 04:57:42.893457 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe86203c-44de-468e-9a1f-11db50f9ec22-config-volume\") pod \"coredns-7db6d8ff4d-nzgn4\" (UID: \"fe86203c-44de-468e-9a1f-11db50f9ec22\") " pod="kube-system/coredns-7db6d8ff4d-nzgn4" May 9 04:57:42.893627 kubelet[2706]: I0509 04:57:42.893476 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14c5fd8f-4443-4b4f-a1a7-03b39d4ec063-calico-apiserver-certs\") pod \"calico-apiserver-85b8bfbd84-j6xc6\" (UID: \"14c5fd8f-4443-4b4f-a1a7-03b39d4ec063\") " pod="calico-apiserver/calico-apiserver-85b8bfbd84-j6xc6" May 9 04:57:42.893627 kubelet[2706]: I0509 04:57:42.893494 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854rk\" (UniqueName: \"kubernetes.io/projected/14c5fd8f-4443-4b4f-a1a7-03b39d4ec063-kube-api-access-854rk\") pod \"calico-apiserver-85b8bfbd84-j6xc6\" (UID: \"14c5fd8f-4443-4b4f-a1a7-03b39d4ec063\") " pod="calico-apiserver/calico-apiserver-85b8bfbd84-j6xc6" May 9 04:57:42.893627 kubelet[2706]: I0509 04:57:42.893512 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqsk\" (UniqueName: \"kubernetes.io/projected/fe86203c-44de-468e-9a1f-11db50f9ec22-kube-api-access-zqqsk\") pod \"coredns-7db6d8ff4d-nzgn4\" 
(UID: \"fe86203c-44de-468e-9a1f-11db50f9ec22\") " pod="kube-system/coredns-7db6d8ff4d-nzgn4" May 9 04:57:42.893627 kubelet[2706]: I0509 04:57:42.893530 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbd8cf3-ef09-4192-885a-6d0344b32f46-tigera-ca-bundle\") pod \"calico-kube-controllers-7959888855-znzll\" (UID: \"8dbd8cf3-ef09-4192-885a-6d0344b32f46\") " pod="calico-system/calico-kube-controllers-7959888855-znzll" May 9 04:57:42.893627 kubelet[2706]: I0509 04:57:42.893548 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c11b3bed-2237-41a6-a4b0-e6f731c98df3-config-volume\") pod \"coredns-7db6d8ff4d-kkznp\" (UID: \"c11b3bed-2237-41a6-a4b0-e6f731c98df3\") " pod="kube-system/coredns-7db6d8ff4d-kkznp" May 9 04:57:42.893738 kubelet[2706]: I0509 04:57:42.893572 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw7hj\" (UniqueName: \"kubernetes.io/projected/8dbd8cf3-ef09-4192-885a-6d0344b32f46-kube-api-access-sw7hj\") pod \"calico-kube-controllers-7959888855-znzll\" (UID: \"8dbd8cf3-ef09-4192-885a-6d0344b32f46\") " pod="calico-system/calico-kube-controllers-7959888855-znzll" May 9 04:57:42.893738 kubelet[2706]: I0509 04:57:42.893598 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e14efcfa-b02b-4469-8e9c-9cad29d3a7b6-calico-apiserver-certs\") pod \"calico-apiserver-85b8bfbd84-rt7bb\" (UID: \"e14efcfa-b02b-4469-8e9c-9cad29d3a7b6\") " pod="calico-apiserver/calico-apiserver-85b8bfbd84-rt7bb" May 9 04:57:42.893738 kubelet[2706]: I0509 04:57:42.893617 2706 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7dm5\" 
(UniqueName: \"kubernetes.io/projected/e14efcfa-b02b-4469-8e9c-9cad29d3a7b6-kube-api-access-q7dm5\") pod \"calico-apiserver-85b8bfbd84-rt7bb\" (UID: \"e14efcfa-b02b-4469-8e9c-9cad29d3a7b6\") " pod="calico-apiserver/calico-apiserver-85b8bfbd84-rt7bb" May 9 04:57:43.139511 containerd[1496]: time="2025-05-09T04:57:43.138692379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzgn4,Uid:fe86203c-44de-468e-9a1f-11db50f9ec22,Namespace:kube-system,Attempt:0,}" May 9 04:57:43.145798 containerd[1496]: time="2025-05-09T04:57:43.145753373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kkznp,Uid:c11b3bed-2237-41a6-a4b0-e6f731c98df3,Namespace:kube-system,Attempt:0,}" May 9 04:57:43.160418 containerd[1496]: time="2025-05-09T04:57:43.160349741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-rt7bb,Uid:e14efcfa-b02b-4469-8e9c-9cad29d3a7b6,Namespace:calico-apiserver,Attempt:0,}" May 9 04:57:43.169967 containerd[1496]: time="2025-05-09T04:57:43.169803152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-j6xc6,Uid:14c5fd8f-4443-4b4f-a1a7-03b39d4ec063,Namespace:calico-apiserver,Attempt:0,}" May 9 04:57:43.177337 containerd[1496]: time="2025-05-09T04:57:43.177305081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959888855-znzll,Uid:8dbd8cf3-ef09-4192-885a-6d0344b32f46,Namespace:calico-system,Attempt:0,}" May 9 04:57:43.422251 containerd[1496]: time="2025-05-09T04:57:43.420870526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 04:57:43.683920 containerd[1496]: time="2025-05-09T04:57:43.683672514Z" level=error msg="Failed to destroy network for sandbox \"bcd166a62b4441b9ebbcee2098e82edd0d7bcd98f346ec4875784543a4d0c0af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 9 04:57:43.685522 containerd[1496]: time="2025-05-09T04:57:43.685300195Z" level=error msg="Failed to destroy network for sandbox \"0dec4eef6d25e32fdc758f06caa39d442fe341ecfb59ba207f58732415acd32e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.686306 containerd[1496]: time="2025-05-09T04:57:43.686151261Z" level=error msg="Failed to destroy network for sandbox \"db46c4b372744fb8e652f584c36f2fd080142d017d0c8725ea5d20aea7ee3462\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.692380 containerd[1496]: time="2025-05-09T04:57:43.692342268Z" level=error msg="Failed to destroy network for sandbox \"d94e7786ab0c956a6c7f8735d651b3b46deebf727ae7df5f93dc077f59e61b08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.692603 containerd[1496]: time="2025-05-09T04:57:43.692386993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-rt7bb,Uid:e14efcfa-b02b-4469-8e9c-9cad29d3a7b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcd166a62b4441b9ebbcee2098e82edd0d7bcd98f346ec4875784543a4d0c0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.693231 kubelet[2706]: E0509 04:57:43.692830 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bcd166a62b4441b9ebbcee2098e82edd0d7bcd98f346ec4875784543a4d0c0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.693231 kubelet[2706]: E0509 04:57:43.692913 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcd166a62b4441b9ebbcee2098e82edd0d7bcd98f346ec4875784543a4d0c0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85b8bfbd84-rt7bb" May 9 04:57:43.693231 kubelet[2706]: E0509 04:57:43.692934 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcd166a62b4441b9ebbcee2098e82edd0d7bcd98f346ec4875784543a4d0c0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85b8bfbd84-rt7bb" May 9 04:57:43.693590 kubelet[2706]: E0509 04:57:43.692974 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85b8bfbd84-rt7bb_calico-apiserver(e14efcfa-b02b-4469-8e9c-9cad29d3a7b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85b8bfbd84-rt7bb_calico-apiserver(e14efcfa-b02b-4469-8e9c-9cad29d3a7b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcd166a62b4441b9ebbcee2098e82edd0d7bcd98f346ec4875784543a4d0c0af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-85b8bfbd84-rt7bb" podUID="e14efcfa-b02b-4469-8e9c-9cad29d3a7b6" May 9 04:57:43.693974 containerd[1496]: time="2025-05-09T04:57:43.693929584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzgn4,Uid:fe86203c-44de-468e-9a1f-11db50f9ec22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dec4eef6d25e32fdc758f06caa39d442fe341ecfb59ba207f58732415acd32e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.694113 kubelet[2706]: E0509 04:57:43.694076 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dec4eef6d25e32fdc758f06caa39d442fe341ecfb59ba207f58732415acd32e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.694156 kubelet[2706]: E0509 04:57:43.694116 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dec4eef6d25e32fdc758f06caa39d442fe341ecfb59ba207f58732415acd32e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzgn4" May 9 04:57:43.694156 kubelet[2706]: E0509 04:57:43.694132 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dec4eef6d25e32fdc758f06caa39d442fe341ecfb59ba207f58732415acd32e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzgn4" May 9 04:57:43.694247 kubelet[2706]: E0509 04:57:43.694163 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzgn4_kube-system(fe86203c-44de-468e-9a1f-11db50f9ec22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzgn4_kube-system(fe86203c-44de-468e-9a1f-11db50f9ec22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dec4eef6d25e32fdc758f06caa39d442fe341ecfb59ba207f58732415acd32e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzgn4" podUID="fe86203c-44de-468e-9a1f-11db50f9ec22" May 9 04:57:43.695522 containerd[1496]: time="2025-05-09T04:57:43.695489337Z" level=error msg="Failed to destroy network for sandbox \"20a4077c1ae15907311ffd58dfe4b6701a344254109e7267410ac29159080df4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.697256 containerd[1496]: time="2025-05-09T04:57:43.697133901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-j6xc6,Uid:14c5fd8f-4443-4b4f-a1a7-03b39d4ec063,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db46c4b372744fb8e652f584c36f2fd080142d017d0c8725ea5d20aea7ee3462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.697740 kubelet[2706]: E0509 04:57:43.697660 2706 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db46c4b372744fb8e652f584c36f2fd080142d017d0c8725ea5d20aea7ee3462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.697740 kubelet[2706]: E0509 04:57:43.697734 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db46c4b372744fb8e652f584c36f2fd080142d017d0c8725ea5d20aea7ee3462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85b8bfbd84-j6xc6" May 9 04:57:43.697840 kubelet[2706]: E0509 04:57:43.697751 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db46c4b372744fb8e652f584c36f2fd080142d017d0c8725ea5d20aea7ee3462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85b8bfbd84-j6xc6" May 9 04:57:43.697876 kubelet[2706]: E0509 04:57:43.697827 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85b8bfbd84-j6xc6_calico-apiserver(14c5fd8f-4443-4b4f-a1a7-03b39d4ec063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85b8bfbd84-j6xc6_calico-apiserver(14c5fd8f-4443-4b4f-a1a7-03b39d4ec063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db46c4b372744fb8e652f584c36f2fd080142d017d0c8725ea5d20aea7ee3462\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85b8bfbd84-j6xc6" podUID="14c5fd8f-4443-4b4f-a1a7-03b39d4ec063" May 9 04:57:43.700518 containerd[1496]: time="2025-05-09T04:57:43.700296373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959888855-znzll,Uid:8dbd8cf3-ef09-4192-885a-6d0344b32f46,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d94e7786ab0c956a6c7f8735d651b3b46deebf727ae7df5f93dc077f59e61b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.700698 kubelet[2706]: E0509 04:57:43.700666 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d94e7786ab0c956a6c7f8735d651b3b46deebf727ae7df5f93dc077f59e61b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.700737 kubelet[2706]: E0509 04:57:43.700704 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d94e7786ab0c956a6c7f8735d651b3b46deebf727ae7df5f93dc077f59e61b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7959888855-znzll" May 9 04:57:43.700737 kubelet[2706]: E0509 04:57:43.700722 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d94e7786ab0c956a6c7f8735d651b3b46deebf727ae7df5f93dc077f59e61b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7959888855-znzll" May 9 04:57:43.700826 kubelet[2706]: E0509 04:57:43.700749 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7959888855-znzll_calico-system(8dbd8cf3-ef09-4192-885a-6d0344b32f46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7959888855-znzll_calico-system(8dbd8cf3-ef09-4192-885a-6d0344b32f46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d94e7786ab0c956a6c7f8735d651b3b46deebf727ae7df5f93dc077f59e61b08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7959888855-znzll" podUID="8dbd8cf3-ef09-4192-885a-6d0344b32f46" May 9 04:57:43.708289 containerd[1496]: time="2025-05-09T04:57:43.708236196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kkznp,Uid:c11b3bed-2237-41a6-a4b0-e6f731c98df3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a4077c1ae15907311ffd58dfe4b6701a344254109e7267410ac29159080df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.708672 kubelet[2706]: E0509 04:57:43.708617 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"20a4077c1ae15907311ffd58dfe4b6701a344254109e7267410ac29159080df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:43.708738 kubelet[2706]: E0509 04:57:43.708688 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a4077c1ae15907311ffd58dfe4b6701a344254109e7267410ac29159080df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kkznp" May 9 04:57:43.708738 kubelet[2706]: E0509 04:57:43.708706 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a4077c1ae15907311ffd58dfe4b6701a344254109e7267410ac29159080df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kkznp" May 9 04:57:43.708785 kubelet[2706]: E0509 04:57:43.708750 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kkznp_kube-system(c11b3bed-2237-41a6-a4b0-e6f731c98df3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kkznp_kube-system(c11b3bed-2237-41a6-a4b0-e6f731c98df3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20a4077c1ae15907311ffd58dfe4b6701a344254109e7267410ac29159080df4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kkznp" 
podUID="c11b3bed-2237-41a6-a4b0-e6f731c98df3" May 9 04:57:44.071033 systemd[1]: run-netns-cni\x2d6413a279\x2dea13\x2d354a\x2de6ef\x2db0955b966390.mount: Deactivated successfully. May 9 04:57:44.071128 systemd[1]: run-netns-cni\x2d63b6b5c8\x2d4805\x2ded8d\x2d510a\x2d0362d0147de1.mount: Deactivated successfully. May 9 04:57:44.071176 systemd[1]: run-netns-cni\x2dee5dda0c\x2d77f4\x2d47c5\x2d20ea\x2d6f70b79a607e.mount: Deactivated successfully. May 9 04:57:44.071239 systemd[1]: run-netns-cni\x2d82b75b0f\x2ddd77\x2daf9d\x2d732e\x2da1188c3fc110.mount: Deactivated successfully. May 9 04:57:44.302679 systemd[1]: Created slice kubepods-besteffort-pod7d231f9b_cbef_416a_93ac_f825fa0ec566.slice - libcontainer container kubepods-besteffort-pod7d231f9b_cbef_416a_93ac_f825fa0ec566.slice. May 9 04:57:44.305005 containerd[1496]: time="2025-05-09T04:57:44.304744465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bbccr,Uid:7d231f9b-cbef-416a-93ac-f825fa0ec566,Namespace:calico-system,Attempt:0,}" May 9 04:57:44.357285 containerd[1496]: time="2025-05-09T04:57:44.355090141Z" level=error msg="Failed to destroy network for sandbox \"cf7a55bfa96d184b92d71f6dd3f0a5b331a91eca01f5eb6f5510eb963969e80d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:44.356828 systemd[1]: run-netns-cni\x2d52ae4d43\x2d0f07\x2dd419\x2d7abb\x2d35345f2bc6a6.mount: Deactivated successfully. 
May 9 04:57:44.358186 containerd[1496]: time="2025-05-09T04:57:44.356897996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bbccr,Uid:7d231f9b-cbef-416a-93ac-f825fa0ec566,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7a55bfa96d184b92d71f6dd3f0a5b331a91eca01f5eb6f5510eb963969e80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:44.358422 kubelet[2706]: E0509 04:57:44.358384 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7a55bfa96d184b92d71f6dd3f0a5b331a91eca01f5eb6f5510eb963969e80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 04:57:44.358480 kubelet[2706]: E0509 04:57:44.358442 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7a55bfa96d184b92d71f6dd3f0a5b331a91eca01f5eb6f5510eb963969e80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bbccr" May 9 04:57:44.358480 kubelet[2706]: E0509 04:57:44.358461 2706 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7a55bfa96d184b92d71f6dd3f0a5b331a91eca01f5eb6f5510eb963969e80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-bbccr" May 9 04:57:44.358526 kubelet[2706]: E0509 04:57:44.358505 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bbccr_calico-system(7d231f9b-cbef-416a-93ac-f825fa0ec566)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bbccr_calico-system(7d231f9b-cbef-416a-93ac-f825fa0ec566)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf7a55bfa96d184b92d71f6dd3f0a5b331a91eca01f5eb6f5510eb963969e80d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bbccr" podUID="7d231f9b-cbef-416a-93ac-f825fa0ec566" May 9 04:57:45.145871 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:34260.service - OpenSSH per-connection server daemon (10.0.0.1:34260). May 9 04:57:45.208575 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 34260 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:57:45.209806 sshd-session[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:57:45.214297 systemd-logind[1470]: New session 8 of user core. May 9 04:57:45.221362 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 04:57:45.354351 sshd[3643]: Connection closed by 10.0.0.1 port 34260 May 9 04:57:45.354569 sshd-session[3641]: pam_unix(sshd:session): session closed for user core May 9 04:57:45.358316 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:34260.service: Deactivated successfully. May 9 04:57:45.361736 systemd[1]: session-8.scope: Deactivated successfully. May 9 04:57:45.362734 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit. May 9 04:57:45.363917 systemd-logind[1470]: Removed session 8. 
May 9 04:57:45.739064 kubelet[2706]: I0509 04:57:45.738758 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 04:57:47.359936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930412244.mount: Deactivated successfully. May 9 04:57:47.643918 containerd[1496]: time="2025-05-09T04:57:47.643753955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:47.646185 containerd[1496]: time="2025-05-09T04:57:47.645123661Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:47.680068 containerd[1496]: time="2025-05-09T04:57:47.653918318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 9 04:57:47.681327 containerd[1496]: time="2025-05-09T04:57:47.656633887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:47.681327 containerd[1496]: time="2025-05-09T04:57:47.657400129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.236442313s" May 9 04:57:47.681327 containerd[1496]: time="2025-05-09T04:57:47.680460626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 9 04:57:47.696983 containerd[1496]: time="2025-05-09T04:57:47.696926260Z" level=info 
msg="CreateContainer within sandbox \"182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 04:57:47.885316 containerd[1496]: time="2025-05-09T04:57:47.884128764Z" level=info msg="Container 165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:47.900059 containerd[1496]: time="2025-05-09T04:57:47.899938088Z" level=info msg="CreateContainer within sandbox \"182da50fc37bd4b8fca5101d6719e939158cdcb6d9c6c05ad60c85d79a8d3de3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c\"" May 9 04:57:47.900825 containerd[1496]: time="2025-05-09T04:57:47.900779058Z" level=info msg="StartContainer for \"165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c\"" May 9 04:57:47.902751 containerd[1496]: time="2025-05-09T04:57:47.902717464Z" level=info msg="connecting to shim 165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c" address="unix:///run/containerd/s/dc2ea9b5db63a8aa65ea3a2f3769189b1d1c36eb12d7caa8b08f46551570df9e" protocol=ttrpc version=3 May 9 04:57:47.923384 systemd[1]: Started cri-containerd-165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c.scope - libcontainer container 165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c. May 9 04:57:47.961650 containerd[1496]: time="2025-05-09T04:57:47.961597977Z" level=info msg="StartContainer for \"165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c\" returns successfully" May 9 04:57:48.139212 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 04:57:48.139342 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 9 04:57:48.453173 kubelet[2706]: I0509 04:57:48.452798 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g6dpj" podStartSLOduration=1.624984425 podStartE2EDuration="14.45278109s" podCreationTimestamp="2025-05-09 04:57:34 +0000 UTC" firstStartedPulling="2025-05-09 04:57:34.858632477 +0000 UTC m=+20.647248344" lastFinishedPulling="2025-05-09 04:57:47.686429182 +0000 UTC m=+33.475045009" observedRunningTime="2025-05-09 04:57:48.451531641 +0000 UTC m=+34.240147548" watchObservedRunningTime="2025-05-09 04:57:48.45278109 +0000 UTC m=+34.241396957" May 9 04:57:49.445275 kubelet[2706]: I0509 04:57:49.442170 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 04:57:49.613235 kernel: bpftool[3862]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 04:57:49.767099 systemd-networkd[1402]: vxlan.calico: Link UP May 9 04:57:49.767107 systemd-networkd[1402]: vxlan.calico: Gained carrier May 9 04:57:50.369837 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:34262.service - OpenSSH per-connection server daemon (10.0.0.1:34262). May 9 04:57:50.432090 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 34262 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:57:50.433533 sshd-session[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:57:50.439542 systemd-logind[1470]: New session 9 of user core. May 9 04:57:50.443705 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 04:57:50.609680 sshd[3933]: Connection closed by 10.0.0.1 port 34262 May 9 04:57:50.610059 sshd-session[3930]: pam_unix(sshd:session): session closed for user core May 9 04:57:50.618727 systemd[1]: sshd@8-10.0.0.63:22-10.0.0.1:34262.service: Deactivated successfully. May 9 04:57:50.620814 systemd[1]: session-9.scope: Deactivated successfully. May 9 04:57:50.622020 systemd-logind[1470]: Session 9 logged out. 
Waiting for processes to exit. May 9 04:57:50.622942 systemd-logind[1470]: Removed session 9. May 9 04:57:51.134406 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL May 9 04:57:54.299062 containerd[1496]: time="2025-05-09T04:57:54.297908860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959888855-znzll,Uid:8dbd8cf3-ef09-4192-885a-6d0344b32f46,Namespace:calico-system,Attempt:0,}" May 9 04:57:54.302901 containerd[1496]: time="2025-05-09T04:57:54.302865002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kkznp,Uid:c11b3bed-2237-41a6-a4b0-e6f731c98df3,Namespace:kube-system,Attempt:0,}" May 9 04:57:54.575310 systemd-networkd[1402]: cali3a9180fbce3: Link UP May 9 04:57:54.576465 systemd-networkd[1402]: cali3a9180fbce3: Gained carrier May 9 04:57:54.590973 containerd[1496]: 2025-05-09 04:57:54.386 [INFO][3958] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0 coredns-7db6d8ff4d- kube-system c11b3bed-2237-41a6-a4b0-e6f731c98df3 676 0 2025-05-09 04:57:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-kkznp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a9180fbce3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-" May 9 04:57:54.590973 containerd[1496]: 2025-05-09 04:57:54.386 [INFO][3958] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" May 9 04:57:54.590973 containerd[1496]: 2025-05-09 04:57:54.518 [INFO][3979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" HandleID="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Workload="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.538 [INFO][3979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" HandleID="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Workload="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001337f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-kkznp", "timestamp":"2025-05-09 04:57:54.518183341 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.539 [INFO][3979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.539 [INFO][3979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.539 [INFO][3979] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.540 [INFO][3979] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" host="localhost" May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.547 [INFO][3979] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.551 [INFO][3979] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.552 [INFO][3979] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.554 [INFO][3979] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 04:57:54.591183 containerd[1496]: 2025-05-09 04:57:54.554 [INFO][3979] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" host="localhost" May 9 04:57:54.591422 containerd[1496]: 2025-05-09 04:57:54.555 [INFO][3979] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8 May 9 04:57:54.591422 containerd[1496]: 2025-05-09 04:57:54.559 [INFO][3979] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" host="localhost" May 9 04:57:54.591422 containerd[1496]: 2025-05-09 04:57:54.568 [INFO][3979] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" host="localhost" May 9 04:57:54.591422 containerd[1496]: 2025-05-09 04:57:54.568 [INFO][3979] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" host="localhost" May 9 04:57:54.591422 containerd[1496]: 2025-05-09 04:57:54.568 [INFO][3979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 04:57:54.591422 containerd[1496]: 2025-05-09 04:57:54.568 [INFO][3979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" HandleID="k8s-pod-network.af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Workload="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" May 9 04:57:54.591536 containerd[1496]: 2025-05-09 04:57:54.570 [INFO][3958] cni-plugin/k8s.go 386: Populated endpoint ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c11b3bed-2237-41a6-a4b0-e6f731c98df3", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-kkznp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a9180fbce3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:54.591590 containerd[1496]: 2025-05-09 04:57:54.570 [INFO][3958] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" May 9 04:57:54.591590 containerd[1496]: 2025-05-09 04:57:54.570 [INFO][3958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a9180fbce3 ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" May 9 04:57:54.591590 containerd[1496]: 2025-05-09 04:57:54.575 [INFO][3958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" May 9 
04:57:54.591658 containerd[1496]: 2025-05-09 04:57:54.575 [INFO][3958] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c11b3bed-2237-41a6-a4b0-e6f731c98df3", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8", Pod:"coredns-7db6d8ff4d-kkznp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a9180fbce3", MAC:"8a:55:76:ab:de:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:54.591658 containerd[1496]: 2025-05-09 04:57:54.588 [INFO][3958] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kkznp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kkznp-eth0" May 9 04:57:54.613228 systemd-networkd[1402]: calicdf7fc35b70: Link UP May 9 04:57:54.613421 systemd-networkd[1402]: calicdf7fc35b70: Gained carrier May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.384 [INFO][3949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0 calico-kube-controllers-7959888855- calico-system 8dbd8cf3-ef09-4192-885a-6d0344b32f46 675 0 2025-05-09 04:57:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7959888855 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7959888855-znzll eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicdf7fc35b70 [] []}} ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.384 [INFO][3949] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.518 [INFO][3981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" HandleID="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Workload="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.540 [INFO][3981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" HandleID="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Workload="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b7100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7959888855-znzll", "timestamp":"2025-05-09 04:57:54.518214064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.540 [INFO][3981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.568 [INFO][3981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.568 [INFO][3981] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.570 [INFO][3981] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.577 [INFO][3981] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.583 [INFO][3981] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.587 [INFO][3981] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.594 [INFO][3981] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.594 [INFO][3981] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.596 [INFO][3981] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.600 [INFO][3981] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.606 [INFO][3981] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.606 [INFO][3981] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" host="localhost" May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.606 [INFO][3981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 04:57:54.626115 containerd[1496]: 2025-05-09 04:57:54.606 [INFO][3981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" HandleID="k8s-pod-network.35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Workload="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" May 9 04:57:54.626794 containerd[1496]: 2025-05-09 04:57:54.609 [INFO][3949] cni-plugin/k8s.go 386: Populated endpoint ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0", GenerateName:"calico-kube-controllers-7959888855-", Namespace:"calico-system", SelfLink:"", UID:"8dbd8cf3-ef09-4192-885a-6d0344b32f46", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7959888855", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7959888855-znzll", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf7fc35b70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:54.626794 containerd[1496]: 2025-05-09 04:57:54.609 [INFO][3949] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" May 9 04:57:54.626794 containerd[1496]: 2025-05-09 04:57:54.609 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdf7fc35b70 ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" May 9 04:57:54.626794 containerd[1496]: 2025-05-09 04:57:54.613 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" May 9 04:57:54.626794 containerd[1496]: 2025-05-09 04:57:54.614 [INFO][3949] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0", GenerateName:"calico-kube-controllers-7959888855-", Namespace:"calico-system", SelfLink:"", UID:"8dbd8cf3-ef09-4192-885a-6d0344b32f46", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7959888855", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a", Pod:"calico-kube-controllers-7959888855-znzll", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf7fc35b70", MAC:"36:b2:a3:8d:24:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:54.626794 containerd[1496]: 2025-05-09 04:57:54.624 [INFO][3949] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" Namespace="calico-system" Pod="calico-kube-controllers-7959888855-znzll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7959888855--znzll-eth0" May 9 04:57:54.672097 containerd[1496]: time="2025-05-09T04:57:54.671993322Z" level=info msg="connecting to shim 35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a" address="unix:///run/containerd/s/6b724d83c10a61cf2be1c6ccdb245dd4f0aecc716c84000a7d95703f5ce4da31" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:54.687607 containerd[1496]: time="2025-05-09T04:57:54.687565528Z" level=info msg="connecting to shim af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8" address="unix:///run/containerd/s/fae3d591969d907c14782272f0cbc33c64534ee858ed90af8c8fcea24b769ffa" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:54.696369 systemd[1]: Started cri-containerd-35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a.scope - libcontainer container 35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a. May 9 04:57:54.707728 systemd[1]: Started cri-containerd-af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8.scope - libcontainer container af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8. 
May 9 04:57:54.712385 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:57:54.721752 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:57:54.742239 containerd[1496]: time="2025-05-09T04:57:54.742098772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959888855-znzll,Uid:8dbd8cf3-ef09-4192-885a-6d0344b32f46,Namespace:calico-system,Attempt:0,} returns sandbox id \"35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a\"" May 9 04:57:54.743511 containerd[1496]: time="2025-05-09T04:57:54.743483730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 9 04:57:54.755405 containerd[1496]: time="2025-05-09T04:57:54.755372903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kkznp,Uid:c11b3bed-2237-41a6-a4b0-e6f731c98df3,Namespace:kube-system,Attempt:0,} returns sandbox id \"af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8\"" May 9 04:57:54.758575 containerd[1496]: time="2025-05-09T04:57:54.758391560Z" level=info msg="CreateContainer within sandbox \"af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 04:57:54.767043 containerd[1496]: time="2025-05-09T04:57:54.766996493Z" level=info msg="Container bd5cb59ffe4786ab7fb6051507289c1f3ad4569e7fb6740ccd14405669a4f77a: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:54.772024 containerd[1496]: time="2025-05-09T04:57:54.771981558Z" level=info msg="CreateContainer within sandbox \"af32b9a885f8271f6932d71c91aa5def18e714d504959b9c74644d81dfbf9de8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd5cb59ffe4786ab7fb6051507289c1f3ad4569e7fb6740ccd14405669a4f77a\"" May 9 04:57:54.772703 containerd[1496]: time="2025-05-09T04:57:54.772664016Z" 
level=info msg="StartContainer for \"bd5cb59ffe4786ab7fb6051507289c1f3ad4569e7fb6740ccd14405669a4f77a\"" May 9 04:57:54.773534 containerd[1496]: time="2025-05-09T04:57:54.773510528Z" level=info msg="connecting to shim bd5cb59ffe4786ab7fb6051507289c1f3ad4569e7fb6740ccd14405669a4f77a" address="unix:///run/containerd/s/fae3d591969d907c14782272f0cbc33c64534ee858ed90af8c8fcea24b769ffa" protocol=ttrpc version=3 May 9 04:57:54.792364 systemd[1]: Started cri-containerd-bd5cb59ffe4786ab7fb6051507289c1f3ad4569e7fb6740ccd14405669a4f77a.scope - libcontainer container bd5cb59ffe4786ab7fb6051507289c1f3ad4569e7fb6740ccd14405669a4f77a. May 9 04:57:54.828284 containerd[1496]: time="2025-05-09T04:57:54.828169063Z" level=info msg="StartContainer for \"bd5cb59ffe4786ab7fb6051507289c1f3ad4569e7fb6740ccd14405669a4f77a\" returns successfully" May 9 04:57:55.466985 kubelet[2706]: I0509 04:57:55.466890 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kkznp" podStartSLOduration=27.466874333 podStartE2EDuration="27.466874333s" podCreationTimestamp="2025-05-09 04:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:57:55.465702996 +0000 UTC m=+41.254318863" watchObservedRunningTime="2025-05-09 04:57:55.466874333 +0000 UTC m=+41.255490200" May 9 04:57:55.625001 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:44984.service - OpenSSH per-connection server daemon (10.0.0.1:44984). May 9 04:57:55.689957 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 44984 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:57:55.691790 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:57:55.696503 systemd-logind[1470]: New session 10 of user core. May 9 04:57:55.706682 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 9 04:57:55.741333 systemd-networkd[1402]: cali3a9180fbce3: Gained IPv6LL May 9 04:57:55.901332 sshd[4166]: Connection closed by 10.0.0.1 port 44984 May 9 04:57:55.901687 sshd-session[4164]: pam_unix(sshd:session): session closed for user core May 9 04:57:55.914093 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:44984.service: Deactivated successfully. May 9 04:57:55.916109 systemd[1]: session-10.scope: Deactivated successfully. May 9 04:57:55.918041 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit. May 9 04:57:55.922507 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:44988.service - OpenSSH per-connection server daemon (10.0.0.1:44988). May 9 04:57:55.923719 systemd-logind[1470]: Removed session 10. May 9 04:57:55.977216 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 44988 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:57:55.978062 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:57:55.983321 systemd-logind[1470]: New session 11 of user core. May 9 04:57:55.995386 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 04:57:56.062815 systemd-networkd[1402]: calicdf7fc35b70: Gained IPv6LL May 9 04:57:56.251033 sshd[4186]: Connection closed by 10.0.0.1 port 44988 May 9 04:57:56.252152 sshd-session[4179]: pam_unix(sshd:session): session closed for user core May 9 04:57:56.266741 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:44988.service: Deactivated successfully. May 9 04:57:56.269115 systemd[1]: session-11.scope: Deactivated successfully. May 9 04:57:56.275334 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit. May 9 04:57:56.278908 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:45000.service - OpenSSH per-connection server daemon (10.0.0.1:45000). May 9 04:57:56.280506 systemd-logind[1470]: Removed session 11. 
May 9 04:57:56.300567 containerd[1496]: time="2025-05-09T04:57:56.300533891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzgn4,Uid:fe86203c-44de-468e-9a1f-11db50f9ec22,Namespace:kube-system,Attempt:0,}" May 9 04:57:56.301386 containerd[1496]: time="2025-05-09T04:57:56.300708826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-j6xc6,Uid:14c5fd8f-4443-4b4f-a1a7-03b39d4ec063,Namespace:calico-apiserver,Attempt:0,}" May 9 04:57:56.346643 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 45000 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:57:56.347229 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:57:56.355914 systemd-logind[1470]: New session 12 of user core. May 9 04:57:56.359338 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 04:57:56.455232 containerd[1496]: time="2025-05-09T04:57:56.455173602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:56.455725 containerd[1496]: time="2025-05-09T04:57:56.455691083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 9 04:57:56.457446 containerd[1496]: time="2025-05-09T04:57:56.457414542Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:56.467799 containerd[1496]: time="2025-05-09T04:57:56.467742695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:56.469621 containerd[1496]: time="2025-05-09T04:57:56.469582204Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.726066511s" May 9 04:57:56.469697 containerd[1496]: time="2025-05-09T04:57:56.469623047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 9 04:57:56.484123 containerd[1496]: time="2025-05-09T04:57:56.484081853Z" level=info msg="CreateContainer within sandbox \"35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 9 04:57:56.491934 containerd[1496]: time="2025-05-09T04:57:56.491808356Z" level=info msg="Container 2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:56.516341 containerd[1496]: time="2025-05-09T04:57:56.516091514Z" level=info msg="CreateContainer within sandbox \"35ad94b0dd244a8bd53ba0731695f05b582f62c58bcf383e6c803abc5cb4e71a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2\"" May 9 04:57:56.518516 containerd[1496]: time="2025-05-09T04:57:56.518178242Z" level=info msg="StartContainer for \"2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2\"" May 9 04:57:56.521470 systemd-networkd[1402]: cali305970a204a: Link UP May 9 04:57:56.522169 systemd-networkd[1402]: cali305970a204a: Gained carrier May 9 04:57:56.524606 containerd[1496]: time="2025-05-09T04:57:56.524544916Z" level=info msg="connecting to shim 2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2" 
address="unix:///run/containerd/s/6b724d83c10a61cf2be1c6ccdb245dd4f0aecc716c84000a7d95703f5ce4da31" protocol=ttrpc version=3 May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.384 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0 coredns-7db6d8ff4d- kube-system fe86203c-44de-468e-9a1f-11db50f9ec22 671 0 2025-05-09 04:57:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-nzgn4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali305970a204a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.384 [INFO][4200] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.448 [INFO][4235] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" HandleID="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Workload="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.469 [INFO][4235] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" 
HandleID="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Workload="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002797f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-nzgn4", "timestamp":"2025-05-09 04:57:56.448542747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.469 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.470 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.470 [INFO][4235] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.474 [INFO][4235] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.485 [INFO][4235] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.489 [INFO][4235] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.491 [INFO][4235] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.496 [INFO][4235] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.498 [INFO][4235] 
ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.500 [INFO][4235] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3 May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.504 [INFO][4235] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.509 [INFO][4235] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.509 [INFO][4235] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" host="localhost" May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.509 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 04:57:56.540905 containerd[1496]: 2025-05-09 04:57:56.509 [INFO][4235] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" HandleID="k8s-pod-network.c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Workload="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" May 9 04:57:56.543176 containerd[1496]: 2025-05-09 04:57:56.515 [INFO][4200] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fe86203c-44de-468e-9a1f-11db50f9ec22", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-nzgn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali305970a204a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:56.543176 containerd[1496]: 2025-05-09 04:57:56.515 [INFO][4200] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" May 9 04:57:56.543176 containerd[1496]: 2025-05-09 04:57:56.515 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali305970a204a ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" May 9 04:57:56.543176 containerd[1496]: 2025-05-09 04:57:56.521 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" May 9 04:57:56.543176 containerd[1496]: 2025-05-09 04:57:56.523 [INFO][4200] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fe86203c-44de-468e-9a1f-11db50f9ec22", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3", Pod:"coredns-7db6d8ff4d-nzgn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali305970a204a", MAC:"22:96:b8:4e:a3:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:56.543176 containerd[1496]: 2025-05-09 04:57:56.537 [INFO][4200] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzgn4" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nzgn4-eth0" May 9 04:57:56.570245 systemd[1]: Started cri-containerd-2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2.scope - libcontainer container 2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2. May 9 04:57:56.576086 systemd-networkd[1402]: calie9acfb6f371: Link UP May 9 04:57:56.579879 systemd-networkd[1402]: calie9acfb6f371: Gained carrier May 9 04:57:56.586888 containerd[1496]: time="2025-05-09T04:57:56.585431986Z" level=info msg="connecting to shim c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3" address="unix:///run/containerd/s/8e61ebf50ce46de1561a8484248ca0bc554ee80ae68f5e2c67014700b512105e" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.385 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0 calico-apiserver-85b8bfbd84- calico-apiserver 14c5fd8f-4443-4b4f-a1a7-03b39d4ec063 674 0 2025-05-09 04:57:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85b8bfbd84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85b8bfbd84-j6xc6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie9acfb6f371 [] []}} ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.385 [INFO][4206] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" 
Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.449 [INFO][4237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" HandleID="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Workload="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.473 [INFO][4237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" HandleID="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Workload="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059baa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85b8bfbd84-j6xc6", "timestamp":"2025-05-09 04:57:56.449709841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.473 [INFO][4237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.509 [INFO][4237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.510 [INFO][4237] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.513 [INFO][4237] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.521 [INFO][4237] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.530 [INFO][4237] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.536 [INFO][4237] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.540 [INFO][4237] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.540 [INFO][4237] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.542 [INFO][4237] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.547 [INFO][4237] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.559 [INFO][4237] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.559 [INFO][4237] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" host="localhost" May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.559 [INFO][4237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 04:57:56.597142 containerd[1496]: 2025-05-09 04:57:56.559 [INFO][4237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" HandleID="k8s-pod-network.5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Workload="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" May 9 04:57:56.599219 containerd[1496]: 2025-05-09 04:57:56.565 [INFO][4206] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0", GenerateName:"calico-apiserver-85b8bfbd84-", Namespace:"calico-apiserver", SelfLink:"", UID:"14c5fd8f-4443-4b4f-a1a7-03b39d4ec063", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85b8bfbd84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85b8bfbd84-j6xc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9acfb6f371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:56.599219 containerd[1496]: 2025-05-09 04:57:56.565 [INFO][4206] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" May 9 04:57:56.599219 containerd[1496]: 2025-05-09 04:57:56.565 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9acfb6f371 ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" May 9 04:57:56.599219 containerd[1496]: 2025-05-09 04:57:56.578 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" May 9 04:57:56.599219 containerd[1496]: 2025-05-09 04:57:56.578 [INFO][4206] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0", GenerateName:"calico-apiserver-85b8bfbd84-", Namespace:"calico-apiserver", SelfLink:"", UID:"14c5fd8f-4443-4b4f-a1a7-03b39d4ec063", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85b8bfbd84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd", Pod:"calico-apiserver-85b8bfbd84-j6xc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9acfb6f371", MAC:"86:3b:4e:a8:ee:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:56.599219 containerd[1496]: 2025-05-09 04:57:56.593 [INFO][4206] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" Namespace="calico-apiserver" 
Pod="calico-apiserver-85b8bfbd84-j6xc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--j6xc6-eth0" May 9 04:57:56.618396 systemd[1]: Started cri-containerd-c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3.scope - libcontainer container c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3. May 9 04:57:56.638366 containerd[1496]: time="2025-05-09T04:57:56.638323771Z" level=info msg="StartContainer for \"2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2\" returns successfully" May 9 04:57:56.645242 sshd[4229]: Connection closed by 10.0.0.1 port 45000 May 9 04:57:56.646084 sshd-session[4197]: pam_unix(sshd:session): session closed for user core May 9 04:57:56.649459 containerd[1496]: time="2025-05-09T04:57:56.646283733Z" level=info msg="connecting to shim 5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd" address="unix:///run/containerd/s/cfba247fb51821922981bba36c148913f88b1f4bfed73cef89035d54774ccbdd" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:56.654954 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:45000.service: Deactivated successfully. May 9 04:57:56.657881 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:57:56.658931 systemd[1]: session-12.scope: Deactivated successfully. May 9 04:57:56.660974 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit. May 9 04:57:56.665613 systemd-logind[1470]: Removed session 12. May 9 04:57:56.688423 systemd[1]: Started cri-containerd-5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd.scope - libcontainer container 5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd. 
May 9 04:57:56.695997 containerd[1496]: time="2025-05-09T04:57:56.695952298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzgn4,Uid:fe86203c-44de-468e-9a1f-11db50f9ec22,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3\"" May 9 04:57:56.699310 containerd[1496]: time="2025-05-09T04:57:56.698774606Z" level=info msg="CreateContainer within sandbox \"c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 04:57:56.705325 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:57:56.733764 containerd[1496]: time="2025-05-09T04:57:56.733666059Z" level=info msg="Container 3f11c2108a3201df19599c914c46a42333492468d7a940d73565697c7e53db46: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:56.739634 containerd[1496]: time="2025-05-09T04:57:56.739594297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-j6xc6,Uid:14c5fd8f-4443-4b4f-a1a7-03b39d4ec063,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd\"" May 9 04:57:56.746424 containerd[1496]: time="2025-05-09T04:57:56.742289675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 9 04:57:56.746424 containerd[1496]: time="2025-05-09T04:57:56.743405965Z" level=info msg="CreateContainer within sandbox \"c7cdaa4ba522d5dde82e0e0c66e3ff8fbe17cfb4954b4407caca8753324527b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f11c2108a3201df19599c914c46a42333492468d7a940d73565697c7e53db46\"" May 9 04:57:56.746424 containerd[1496]: time="2025-05-09T04:57:56.744004453Z" level=info msg="StartContainer for \"3f11c2108a3201df19599c914c46a42333492468d7a940d73565697c7e53db46\"" May 9 04:57:56.746424 containerd[1496]: 
time="2025-05-09T04:57:56.746397366Z" level=info msg="connecting to shim 3f11c2108a3201df19599c914c46a42333492468d7a940d73565697c7e53db46" address="unix:///run/containerd/s/8e61ebf50ce46de1561a8484248ca0bc554ee80ae68f5e2c67014700b512105e" protocol=ttrpc version=3 May 9 04:57:56.787446 systemd[1]: Started cri-containerd-3f11c2108a3201df19599c914c46a42333492468d7a940d73565697c7e53db46.scope - libcontainer container 3f11c2108a3201df19599c914c46a42333492468d7a940d73565697c7e53db46. May 9 04:57:56.814746 containerd[1496]: time="2025-05-09T04:57:56.814162071Z" level=info msg="StartContainer for \"3f11c2108a3201df19599c914c46a42333492468d7a940d73565697c7e53db46\" returns successfully" May 9 04:57:57.296783 containerd[1496]: time="2025-05-09T04:57:57.296498960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bbccr,Uid:7d231f9b-cbef-416a-93ac-f825fa0ec566,Namespace:calico-system,Attempt:0,}" May 9 04:57:57.412401 systemd-networkd[1402]: calif6769d51141: Link UP May 9 04:57:57.413728 systemd-networkd[1402]: calif6769d51141: Gained carrier May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.332 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bbccr-eth0 csi-node-driver- calico-system 7d231f9b-cbef-416a-93ac-f825fa0ec566 604 0 2025-05-09 04:57:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bbccr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif6769d51141 [] []}} ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" Pod="csi-node-driver-bbccr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.332 [INFO][4451] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" Pod="csi-node-driver-bbccr" WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-eth0" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.360 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" HandleID="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Workload="localhost-k8s-csi--node--driver--bbccr-eth0" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.372 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" HandleID="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Workload="localhost-k8s-csi--node--driver--bbccr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001362b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bbccr", "timestamp":"2025-05-09 04:57:57.360064555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.373 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.373 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.373 [INFO][4466] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.375 [INFO][4466] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.379 [INFO][4466] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.384 [INFO][4466] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.386 [INFO][4466] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.388 [INFO][4466] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.388 [INFO][4466] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.390 [INFO][4466] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6 May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.395 [INFO][4466] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.407 [INFO][4466] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.407 [INFO][4466] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" host="localhost" May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.407 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 04:57:57.428380 containerd[1496]: 2025-05-09 04:57:57.407 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" HandleID="k8s-pod-network.6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Workload="localhost-k8s-csi--node--driver--bbccr-eth0" May 9 04:57:57.429331 containerd[1496]: 2025-05-09 04:57:57.409 [INFO][4451] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" Pod="csi-node-driver-bbccr" WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bbccr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d231f9b-cbef-416a-93ac-f825fa0ec566", ResourceVersion:"604", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bbccr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6769d51141", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:57.429331 containerd[1496]: 2025-05-09 04:57:57.409 [INFO][4451] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" Pod="csi-node-driver-bbccr" WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-eth0" May 9 04:57:57.429331 containerd[1496]: 2025-05-09 04:57:57.409 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6769d51141 ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" Pod="csi-node-driver-bbccr" WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-eth0" May 9 04:57:57.429331 containerd[1496]: 2025-05-09 04:57:57.413 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" Pod="csi-node-driver-bbccr" WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-eth0" May 9 04:57:57.429331 containerd[1496]: 2025-05-09 04:57:57.414 [INFO][4451] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" 
Pod="csi-node-driver-bbccr" WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bbccr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d231f9b-cbef-416a-93ac-f825fa0ec566", ResourceVersion:"604", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6", Pod:"csi-node-driver-bbccr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6769d51141", MAC:"c2:42:85:4c:af:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:57.429331 containerd[1496]: 2025-05-09 04:57:57.426 [INFO][4451] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" Namespace="calico-system" Pod="csi-node-driver-bbccr" WorkloadEndpoint="localhost-k8s-csi--node--driver--bbccr-eth0" May 9 04:57:57.450765 containerd[1496]: 
time="2025-05-09T04:57:57.450697918Z" level=info msg="connecting to shim 6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6" address="unix:///run/containerd/s/3728bbddb59b9018355818e74e5a7ce1da00955738129e11e171a88ce5bdc142" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:57.475276 systemd[1]: Started cri-containerd-6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6.scope - libcontainer container 6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6. May 9 04:57:57.492450 kubelet[2706]: I0509 04:57:57.492390 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7959888855-znzll" podStartSLOduration=21.762526817 podStartE2EDuration="23.492370753s" podCreationTimestamp="2025-05-09 04:57:34 +0000 UTC" firstStartedPulling="2025-05-09 04:57:54.743318316 +0000 UTC m=+40.531934183" lastFinishedPulling="2025-05-09 04:57:56.473162252 +0000 UTC m=+42.261778119" observedRunningTime="2025-05-09 04:57:57.491843871 +0000 UTC m=+43.280459738" watchObservedRunningTime="2025-05-09 04:57:57.492370753 +0000 UTC m=+43.280986620" May 9 04:57:57.501709 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:57:57.527061 kubelet[2706]: I0509 04:57:57.526976 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nzgn4" podStartSLOduration=29.526961351 podStartE2EDuration="29.526961351s" podCreationTimestamp="2025-05-09 04:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:57:57.526289858 +0000 UTC m=+43.314905805" watchObservedRunningTime="2025-05-09 04:57:57.526961351 +0000 UTC m=+43.315577218" May 9 04:57:57.538138 containerd[1496]: time="2025-05-09T04:57:57.538090345Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-bbccr,Uid:7d231f9b-cbef-416a-93ac-f825fa0ec566,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6\"" May 9 04:57:57.628092 containerd[1496]: time="2025-05-09T04:57:57.627423206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2\" id:\"1a78daffecf09da023580ccc6f1604f207ade3e408fe8d111ed9848fcb155229\" pid:4559 exited_at:{seconds:1746766677 nanos:627094900}" May 9 04:57:57.725438 systemd-networkd[1402]: calie9acfb6f371: Gained IPv6LL May 9 04:57:58.109540 systemd-networkd[1402]: cali305970a204a: Gained IPv6LL May 9 04:57:58.631255 containerd[1496]: time="2025-05-09T04:57:58.631186714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:58.632607 containerd[1496]: time="2025-05-09T04:57:58.632583861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 9 04:57:58.633552 containerd[1496]: time="2025-05-09T04:57:58.633517093Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:58.638839 containerd[1496]: time="2025-05-09T04:57:58.638724532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:58.646306 containerd[1496]: time="2025-05-09T04:57:58.639604760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.897277722s" May 9 04:57:58.646429 containerd[1496]: time="2025-05-09T04:57:58.646312474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 9 04:57:58.647338 containerd[1496]: time="2025-05-09T04:57:58.647297869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 9 04:57:58.650276 containerd[1496]: time="2025-05-09T04:57:58.650237535Z" level=info msg="CreateContainer within sandbox \"5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 04:57:58.655994 containerd[1496]: time="2025-05-09T04:57:58.654677835Z" level=info msg="Container 2ae93d5f53b2d48fc9398a4344e29e50b5a81a612f8b0d1004246bb5a2b86f6d: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:58.663661 containerd[1496]: time="2025-05-09T04:57:58.663619801Z" level=info msg="CreateContainer within sandbox \"5f6c466d6454baf589238a94ad7cc24816bacc927c3bc69f083130f6f1a881cd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ae93d5f53b2d48fc9398a4344e29e50b5a81a612f8b0d1004246bb5a2b86f6d\"" May 9 04:57:58.664171 containerd[1496]: time="2025-05-09T04:57:58.664111998Z" level=info msg="StartContainer for \"2ae93d5f53b2d48fc9398a4344e29e50b5a81a612f8b0d1004246bb5a2b86f6d\"" May 9 04:57:58.665473 containerd[1496]: time="2025-05-09T04:57:58.665436500Z" level=info msg="connecting to shim 2ae93d5f53b2d48fc9398a4344e29e50b5a81a612f8b0d1004246bb5a2b86f6d" address="unix:///run/containerd/s/cfba247fb51821922981bba36c148913f88b1f4bfed73cef89035d54774ccbdd" protocol=ttrpc version=3 May 9 04:57:58.683365 systemd[1]: Started cri-containerd-2ae93d5f53b2d48fc9398a4344e29e50b5a81a612f8b0d1004246bb5a2b86f6d.scope - libcontainer container 
2ae93d5f53b2d48fc9398a4344e29e50b5a81a612f8b0d1004246bb5a2b86f6d. May 9 04:57:58.686350 systemd-networkd[1402]: calif6769d51141: Gained IPv6LL May 9 04:57:58.734172 containerd[1496]: time="2025-05-09T04:57:58.734051920Z" level=info msg="StartContainer for \"2ae93d5f53b2d48fc9398a4344e29e50b5a81a612f8b0d1004246bb5a2b86f6d\" returns successfully" May 9 04:57:59.296658 containerd[1496]: time="2025-05-09T04:57:59.296606751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-rt7bb,Uid:e14efcfa-b02b-4469-8e9c-9cad29d3a7b6,Namespace:calico-apiserver,Attempt:0,}" May 9 04:57:59.417496 systemd-networkd[1402]: cali5175a6053ad: Link UP May 9 04:57:59.417648 systemd-networkd[1402]: cali5175a6053ad: Gained carrier May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.334 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0 calico-apiserver-85b8bfbd84- calico-apiserver e14efcfa-b02b-4469-8e9c-9cad29d3a7b6 677 0 2025-05-09 04:57:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85b8bfbd84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85b8bfbd84-rt7bb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5175a6053ad [] []}} ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.334 [INFO][4616] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" 
Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.368 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" HandleID="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Workload="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.381 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" HandleID="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Workload="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003055e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85b8bfbd84-rt7bb", "timestamp":"2025-05-09 04:57:59.368479891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.381 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.381 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.381 [INFO][4631] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.383 [INFO][4631] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.388 [INFO][4631] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.393 [INFO][4631] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.395 [INFO][4631] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.397 [INFO][4631] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.397 [INFO][4631] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.399 [INFO][4631] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8 May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.405 [INFO][4631] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.411 [INFO][4631] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.411 [INFO][4631] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" host="localhost" May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.411 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 04:57:59.429912 containerd[1496]: 2025-05-09 04:57:59.411 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" HandleID="k8s-pod-network.018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Workload="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" May 9 04:57:59.431720 containerd[1496]: 2025-05-09 04:57:59.414 [INFO][4616] cni-plugin/k8s.go 386: Populated endpoint ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0", GenerateName:"calico-apiserver-85b8bfbd84-", Namespace:"calico-apiserver", SelfLink:"", UID:"e14efcfa-b02b-4469-8e9c-9cad29d3a7b6", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85b8bfbd84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85b8bfbd84-rt7bb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5175a6053ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:59.431720 containerd[1496]: 2025-05-09 04:57:59.414 [INFO][4616] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" May 9 04:57:59.431720 containerd[1496]: 2025-05-09 04:57:59.414 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5175a6053ad ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" May 9 04:57:59.431720 containerd[1496]: 2025-05-09 04:57:59.416 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" May 9 04:57:59.431720 containerd[1496]: 2025-05-09 04:57:59.418 [INFO][4616] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0", GenerateName:"calico-apiserver-85b8bfbd84-", Namespace:"calico-apiserver", SelfLink:"", UID:"e14efcfa-b02b-4469-8e9c-9cad29d3a7b6", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 4, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85b8bfbd84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8", Pod:"calico-apiserver-85b8bfbd84-rt7bb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5175a6053ad", MAC:"ca:0b:f7:e0:af:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 04:57:59.431720 containerd[1496]: 2025-05-09 04:57:59.427 [INFO][4616] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" Namespace="calico-apiserver" 
Pod="calico-apiserver-85b8bfbd84-rt7bb" WorkloadEndpoint="localhost-k8s-calico--apiserver--85b8bfbd84--rt7bb-eth0" May 9 04:57:59.456129 containerd[1496]: time="2025-05-09T04:57:59.455657896Z" level=info msg="connecting to shim 018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8" address="unix:///run/containerd/s/9cfef71ba14448e2c7c766f2a8303a4991ab1e3c88b9aaff9696ebdf8bc47991" namespace=k8s.io protocol=ttrpc version=3 May 9 04:57:59.484433 systemd[1]: Started cri-containerd-018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8.scope - libcontainer container 018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8. May 9 04:57:59.520212 kubelet[2706]: I0509 04:57:59.520102 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85b8bfbd84-j6xc6" podStartSLOduration=23.61466085 podStartE2EDuration="25.520084358s" podCreationTimestamp="2025-05-09 04:57:34 +0000 UTC" firstStartedPulling="2025-05-09 04:57:56.741717269 +0000 UTC m=+42.530333136" lastFinishedPulling="2025-05-09 04:57:58.647140777 +0000 UTC m=+44.435756644" observedRunningTime="2025-05-09 04:57:59.51916533 +0000 UTC m=+45.307781197" watchObservedRunningTime="2025-05-09 04:57:59.520084358 +0000 UTC m=+45.308700185" May 9 04:57:59.529472 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:57:59.561418 containerd[1496]: time="2025-05-09T04:57:59.561309204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85b8bfbd84-rt7bb,Uid:e14efcfa-b02b-4469-8e9c-9cad29d3a7b6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8\"" May 9 04:57:59.565282 containerd[1496]: time="2025-05-09T04:57:59.565247619Z" level=info msg="CreateContainer within sandbox \"018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 04:57:59.576424 containerd[1496]: time="2025-05-09T04:57:59.576375852Z" level=info msg="Container 97c09e3b70a81a155bac66cf17de710b749ac973a56b2ec4cd52332312d1ee39: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:59.584460 containerd[1496]: time="2025-05-09T04:57:59.584420254Z" level=info msg="CreateContainer within sandbox \"018967ca832a1a905e98020caa7b9dd75b4d29fff69b5151e7788de3ce8f8ca8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"97c09e3b70a81a155bac66cf17de710b749ac973a56b2ec4cd52332312d1ee39\"" May 9 04:57:59.585813 containerd[1496]: time="2025-05-09T04:57:59.585790397Z" level=info msg="StartContainer for \"97c09e3b70a81a155bac66cf17de710b749ac973a56b2ec4cd52332312d1ee39\"" May 9 04:57:59.587340 containerd[1496]: time="2025-05-09T04:57:59.587310870Z" level=info msg="connecting to shim 97c09e3b70a81a155bac66cf17de710b749ac973a56b2ec4cd52332312d1ee39" address="unix:///run/containerd/s/9cfef71ba14448e2c7c766f2a8303a4991ab1e3c88b9aaff9696ebdf8bc47991" protocol=ttrpc version=3 May 9 04:57:59.609465 systemd[1]: Started cri-containerd-97c09e3b70a81a155bac66cf17de710b749ac973a56b2ec4cd52332312d1ee39.scope - libcontainer container 97c09e3b70a81a155bac66cf17de710b749ac973a56b2ec4cd52332312d1ee39. 
May 9 04:57:59.644464 containerd[1496]: time="2025-05-09T04:57:59.644427346Z" level=info msg="StartContainer for \"97c09e3b70a81a155bac66cf17de710b749ac973a56b2ec4cd52332312d1ee39\" returns successfully" May 9 04:57:59.820496 containerd[1496]: time="2025-05-09T04:57:59.819644141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:59.820496 containerd[1496]: time="2025-05-09T04:57:59.820286789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 9 04:57:59.821017 containerd[1496]: time="2025-05-09T04:57:59.820989722Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:59.825804 containerd[1496]: time="2025-05-09T04:57:59.825761199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:57:59.827247 containerd[1496]: time="2025-05-09T04:57:59.827181265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.179845633s" May 9 04:57:59.827354 containerd[1496]: time="2025-05-09T04:57:59.827338157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 9 04:57:59.830823 containerd[1496]: time="2025-05-09T04:57:59.830786615Z" level=info msg="CreateContainer within sandbox 
\"6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 9 04:57:59.861221 containerd[1496]: time="2025-05-09T04:57:59.858241990Z" level=info msg="Container 588e99f52270c0a3763467fb4aad8e04608d11f92d07b95dfbb1adea9863b6ab: CDI devices from CRI Config.CDIDevices: []" May 9 04:57:59.980191 containerd[1496]: time="2025-05-09T04:57:59.980140915Z" level=info msg="CreateContainer within sandbox \"6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"588e99f52270c0a3763467fb4aad8e04608d11f92d07b95dfbb1adea9863b6ab\"" May 9 04:57:59.981614 containerd[1496]: time="2025-05-09T04:57:59.981583223Z" level=info msg="StartContainer for \"588e99f52270c0a3763467fb4aad8e04608d11f92d07b95dfbb1adea9863b6ab\"" May 9 04:57:59.984297 containerd[1496]: time="2025-05-09T04:57:59.984268504Z" level=info msg="connecting to shim 588e99f52270c0a3763467fb4aad8e04608d11f92d07b95dfbb1adea9863b6ab" address="unix:///run/containerd/s/3728bbddb59b9018355818e74e5a7ce1da00955738129e11e171a88ce5bdc142" protocol=ttrpc version=3 May 9 04:58:00.009337 systemd[1]: Started cri-containerd-588e99f52270c0a3763467fb4aad8e04608d11f92d07b95dfbb1adea9863b6ab.scope - libcontainer container 588e99f52270c0a3763467fb4aad8e04608d11f92d07b95dfbb1adea9863b6ab. 
May 9 04:58:00.048935 containerd[1496]: time="2025-05-09T04:58:00.048891220Z" level=info msg="StartContainer for \"588e99f52270c0a3763467fb4aad8e04608d11f92d07b95dfbb1adea9863b6ab\" returns successfully"
May 9 04:58:00.051821 containerd[1496]: time="2025-05-09T04:58:00.051408484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 9 04:58:00.535908 kubelet[2706]: I0509 04:58:00.535015 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85b8bfbd84-rt7bb" podStartSLOduration=26.534999303 podStartE2EDuration="26.534999303s" podCreationTimestamp="2025-05-09 04:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:58:00.5344061 +0000 UTC m=+46.323021967" watchObservedRunningTime="2025-05-09 04:58:00.534999303 +0000 UTC m=+46.323615170"
May 9 04:58:01.181322 systemd-networkd[1402]: cali5175a6053ad: Gained IPv6LL
May 9 04:58:01.311821 containerd[1496]: time="2025-05-09T04:58:01.311035905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:58:01.311821 containerd[1496]: time="2025-05-09T04:58:01.311471776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 9 04:58:01.312527 containerd[1496]: time="2025-05-09T04:58:01.312491609Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:58:01.330451 containerd[1496]: time="2025-05-09T04:58:01.330404851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:58:01.331257 containerd[1496]: time="2025-05-09T04:58:01.331224790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.279780543s"
May 9 04:58:01.331257 containerd[1496]: time="2025-05-09T04:58:01.331259392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 9 04:58:01.336367 containerd[1496]: time="2025-05-09T04:58:01.335360606Z" level=info msg="CreateContainer within sandbox \"6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 9 04:58:01.483074 containerd[1496]: time="2025-05-09T04:58:01.482963530Z" level=info msg="Container 3fab925a179a0d4c0ee7aedd396d4c81b3eb19ad0ecf051408c41404d9f9ffb2: CDI devices from CRI Config.CDIDevices: []"
May 9 04:58:01.513427 kubelet[2706]: I0509 04:58:01.513385 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 04:58:01.635131 containerd[1496]: time="2025-05-09T04:58:01.635007012Z" level=info msg="CreateContainer within sandbox \"6b3a1b356c9e6e36b5e93c119bb9b67c032b26b1284553e4662cef6d2ccda2e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3fab925a179a0d4c0ee7aedd396d4c81b3eb19ad0ecf051408c41404d9f9ffb2\""
May 9 04:58:01.635644 containerd[1496]: time="2025-05-09T04:58:01.635618535Z" level=info msg="StartContainer for \"3fab925a179a0d4c0ee7aedd396d4c81b3eb19ad0ecf051408c41404d9f9ffb2\""
May 9 04:58:01.637191 containerd[1496]: time="2025-05-09T04:58:01.637147685Z" level=info msg="connecting to shim 3fab925a179a0d4c0ee7aedd396d4c81b3eb19ad0ecf051408c41404d9f9ffb2" address="unix:///run/containerd/s/3728bbddb59b9018355818e74e5a7ce1da00955738129e11e171a88ce5bdc142" protocol=ttrpc version=3
May 9 04:58:01.665245 systemd[1]: Started cri-containerd-3fab925a179a0d4c0ee7aedd396d4c81b3eb19ad0ecf051408c41404d9f9ffb2.scope - libcontainer container 3fab925a179a0d4c0ee7aedd396d4c81b3eb19ad0ecf051408c41404d9f9ffb2.
May 9 04:58:01.667133 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:45014.service - OpenSSH per-connection server daemon (10.0.0.1:45014).
May 9 04:58:01.703476 containerd[1496]: time="2025-05-09T04:58:01.703423708Z" level=info msg="StartContainer for \"3fab925a179a0d4c0ee7aedd396d4c81b3eb19ad0ecf051408c41404d9f9ffb2\" returns successfully"
May 9 04:58:01.744153 sshd[4791]: Accepted publickey for core from 10.0.0.1 port 45014 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:01.745701 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:01.749880 systemd-logind[1470]: New session 13 of user core.
May 9 04:58:01.756352 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 04:58:01.842361 kubelet[2706]: I0509 04:58:01.842318 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 04:58:01.912144 containerd[1496]: time="2025-05-09T04:58:01.912049760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c\" id:\"f18332e6f32106763e8bc18a5d3f28a199c163d57147e9613e7ff15c44bf0770\" pid:4837 exit_status:1 exited_at:{seconds:1746766681 nanos:911156216}"
May 9 04:58:01.955785 sshd[4816]: Connection closed by 10.0.0.1 port 45014
May 9 04:58:01.956116 sshd-session[4791]: pam_unix(sshd:session): session closed for user core
May 9 04:58:01.960276 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit.
May 9 04:58:01.960578 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:45014.service: Deactivated successfully.
May 9 04:58:01.964635 systemd[1]: session-13.scope: Deactivated successfully.
May 9 04:58:01.965703 systemd-logind[1470]: Removed session 13.
May 9 04:58:01.984722 containerd[1496]: time="2025-05-09T04:58:01.984678038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c\" id:\"18183d06be10ba41769caaeb1499596cd11e3e2e7baa6da86323f6fe2afb66ad\" pid:4866 exit_status:1 exited_at:{seconds:1746766681 nanos:984353455}"
May 9 04:58:02.379969 kubelet[2706]: I0509 04:58:02.379854 2706 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 9 04:58:02.386236 kubelet[2706]: I0509 04:58:02.386188 2706 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 9 04:58:02.529750 kubelet[2706]: I0509 04:58:02.529502 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bbccr" podStartSLOduration=24.736828407 podStartE2EDuration="28.529486844s" podCreationTimestamp="2025-05-09 04:57:34 +0000 UTC" firstStartedPulling="2025-05-09 04:57:57.540024337 +0000 UTC m=+43.328640204" lastFinishedPulling="2025-05-09 04:58:01.332682774 +0000 UTC m=+47.121298641" observedRunningTime="2025-05-09 04:58:02.527981419 +0000 UTC m=+48.316597246" watchObservedRunningTime="2025-05-09 04:58:02.529486844 +0000 UTC m=+48.318102711"
May 9 04:58:06.972541 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:36848.service - OpenSSH per-connection server daemon (10.0.0.1:36848).
May 9 04:58:07.036279 sshd[4887]: Accepted publickey for core from 10.0.0.1 port 36848 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:07.037549 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:07.044707 systemd-logind[1470]: New session 14 of user core.
May 9 04:58:07.050687 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 04:58:07.235917 sshd[4889]: Connection closed by 10.0.0.1 port 36848
May 9 04:58:07.236412 sshd-session[4887]: pam_unix(sshd:session): session closed for user core
May 9 04:58:07.240159 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:36848.service: Deactivated successfully.
May 9 04:58:07.241980 systemd[1]: session-14.scope: Deactivated successfully.
May 9 04:58:07.242953 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit.
May 9 04:58:07.243843 systemd-logind[1470]: Removed session 14.
May 9 04:58:12.251539 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:36862.service - OpenSSH per-connection server daemon (10.0.0.1:36862).
May 9 04:58:12.307004 sshd[4911]: Accepted publickey for core from 10.0.0.1 port 36862 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:12.308106 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:12.313148 systemd-logind[1470]: New session 15 of user core.
May 9 04:58:12.322351 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 04:58:12.440048 sshd[4913]: Connection closed by 10.0.0.1 port 36862
May 9 04:58:12.440569 sshd-session[4911]: pam_unix(sshd:session): session closed for user core
May 9 04:58:12.443877 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:36862.service: Deactivated successfully.
May 9 04:58:12.445675 systemd[1]: session-15.scope: Deactivated successfully.
May 9 04:58:12.446553 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit.
May 9 04:58:12.448310 systemd-logind[1470]: Removed session 15.
May 9 04:58:16.618467 kubelet[2706]: I0509 04:58:16.618372 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 04:58:17.456304 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:51510.service - OpenSSH per-connection server daemon (10.0.0.1:51510).
May 9 04:58:17.504473 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 51510 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:17.505735 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:17.510105 systemd-logind[1470]: New session 16 of user core.
May 9 04:58:17.535983 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 04:58:17.687578 sshd[4933]: Connection closed by 10.0.0.1 port 51510
May 9 04:58:17.688085 sshd-session[4931]: pam_unix(sshd:session): session closed for user core
May 9 04:58:17.693281 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:51510.service: Deactivated successfully.
May 9 04:58:17.695024 systemd[1]: session-16.scope: Deactivated successfully.
May 9 04:58:17.696433 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit.
May 9 04:58:17.697827 systemd-logind[1470]: Removed session 16.
May 9 04:58:22.699478 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:56638.service - OpenSSH per-connection server daemon (10.0.0.1:56638).
May 9 04:58:22.782292 sshd[4948]: Accepted publickey for core from 10.0.0.1 port 56638 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:22.783593 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:22.787715 systemd-logind[1470]: New session 17 of user core.
May 9 04:58:22.798373 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 04:58:22.940269 sshd[4950]: Connection closed by 10.0.0.1 port 56638
May 9 04:58:22.940763 sshd-session[4948]: pam_unix(sshd:session): session closed for user core
May 9 04:58:22.951828 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:56638.service: Deactivated successfully.
May 9 04:58:22.954568 systemd[1]: session-17.scope: Deactivated successfully.
May 9 04:58:22.957178 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit.
May 9 04:58:22.960138 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:56648.service - OpenSSH per-connection server daemon (10.0.0.1:56648).
May 9 04:58:22.961255 systemd-logind[1470]: Removed session 17.
May 9 04:58:23.017348 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 56648 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:23.018661 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:23.023177 systemd-logind[1470]: New session 18 of user core.
May 9 04:58:23.037375 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 04:58:23.239465 sshd[4966]: Connection closed by 10.0.0.1 port 56648
May 9 04:58:23.240189 sshd-session[4963]: pam_unix(sshd:session): session closed for user core
May 9 04:58:23.248508 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:56648.service: Deactivated successfully.
May 9 04:58:23.250694 systemd[1]: session-18.scope: Deactivated successfully.
May 9 04:58:23.251877 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit.
May 9 04:58:23.254522 systemd[1]: Started sshd@18-10.0.0.63:22-10.0.0.1:56662.service - OpenSSH per-connection server daemon (10.0.0.1:56662).
May 9 04:58:23.255639 systemd-logind[1470]: Removed session 18.
May 9 04:58:23.311251 sshd[4976]: Accepted publickey for core from 10.0.0.1 port 56662 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:23.312620 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:23.317256 systemd-logind[1470]: New session 19 of user core.
May 9 04:58:23.321334 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 04:58:23.727126 containerd[1496]: time="2025-05-09T04:58:23.727051301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2\" id:\"6c8e832067d0f8955426f490b29a57136b06ea5f02280fb6c3509919f9e01091\" pid:4998 exited_at:{seconds:1746766703 nanos:726847431}"
May 9 04:58:24.807546 sshd[4979]: Connection closed by 10.0.0.1 port 56662
May 9 04:58:24.808138 sshd-session[4976]: pam_unix(sshd:session): session closed for user core
May 9 04:58:24.816520 systemd[1]: sshd@18-10.0.0.63:22-10.0.0.1:56662.service: Deactivated successfully.
May 9 04:58:24.819056 systemd[1]: session-19.scope: Deactivated successfully.
May 9 04:58:24.819396 systemd[1]: session-19.scope: Consumed 513ms CPU time, 66.4M memory peak.
May 9 04:58:24.820104 systemd-logind[1470]: Session 19 logged out. Waiting for processes to exit.
May 9 04:58:24.822357 systemd[1]: Started sshd@19-10.0.0.63:22-10.0.0.1:56676.service - OpenSSH per-connection server daemon (10.0.0.1:56676).
May 9 04:58:24.827664 systemd-logind[1470]: Removed session 19.
May 9 04:58:24.891072 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 56676 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:24.892422 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:24.897566 systemd-logind[1470]: New session 20 of user core.
May 9 04:58:24.907353 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 04:58:25.301133 sshd[5025]: Connection closed by 10.0.0.1 port 56676
May 9 04:58:25.302549 sshd-session[5021]: pam_unix(sshd:session): session closed for user core
May 9 04:58:25.311998 systemd[1]: sshd@19-10.0.0.63:22-10.0.0.1:56676.service: Deactivated successfully.
May 9 04:58:25.313691 systemd[1]: session-20.scope: Deactivated successfully.
May 9 04:58:25.314723 systemd-logind[1470]: Session 20 logged out. Waiting for processes to exit.
May 9 04:58:25.317035 systemd[1]: Started sshd@20-10.0.0.63:22-10.0.0.1:56680.service - OpenSSH per-connection server daemon (10.0.0.1:56680).
May 9 04:58:25.318658 systemd-logind[1470]: Removed session 20.
May 9 04:58:25.370339 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:25.371673 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:25.377370 systemd-logind[1470]: New session 21 of user core.
May 9 04:58:25.386402 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 04:58:25.523260 sshd[5039]: Connection closed by 10.0.0.1 port 56680
May 9 04:58:25.523103 sshd-session[5036]: pam_unix(sshd:session): session closed for user core
May 9 04:58:25.526562 systemd-logind[1470]: Session 21 logged out. Waiting for processes to exit.
May 9 04:58:25.526810 systemd[1]: sshd@20-10.0.0.63:22-10.0.0.1:56680.service: Deactivated successfully.
May 9 04:58:25.529782 systemd[1]: session-21.scope: Deactivated successfully.
May 9 04:58:25.531456 systemd-logind[1470]: Removed session 21.
May 9 04:58:26.359424 containerd[1496]: time="2025-05-09T04:58:26.359377489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f19713646917e764626134b31e1fdd0e968a02aba4dbeb5a833b478bd7ad6e2\" id:\"3476622adb3bf5b9b30a26fcd08dd66e5a8189166a4f893d7496ee43b6fcf94e\" pid:5062 exited_at:{seconds:1746766706 nanos:359109060}"
May 9 04:58:30.534622 systemd[1]: Started sshd@21-10.0.0.63:22-10.0.0.1:56690.service - OpenSSH per-connection server daemon (10.0.0.1:56690).
May 9 04:58:30.591443 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 56690 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:30.592720 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:30.596430 systemd-logind[1470]: New session 22 of user core.
May 9 04:58:30.604332 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 04:58:30.721500 sshd[5088]: Connection closed by 10.0.0.1 port 56690
May 9 04:58:30.721553 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
May 9 04:58:30.724742 systemd[1]: sshd@21-10.0.0.63:22-10.0.0.1:56690.service: Deactivated successfully.
May 9 04:58:30.726403 systemd[1]: session-22.scope: Deactivated successfully.
May 9 04:58:30.727111 systemd-logind[1470]: Session 22 logged out. Waiting for processes to exit.
May 9 04:58:30.727920 systemd-logind[1470]: Removed session 22.
May 9 04:58:31.896296 containerd[1496]: time="2025-05-09T04:58:31.896253448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"165901e584e450ec518011dd0a9327c6587d7b8e08abed441c8bdf8647bab89c\" id:\"5b285f24b0e741d3b0bc583ffc748cea4d8448731a8ecfed4e46b15dc8838831\" pid:5112 exited_at:{seconds:1746766711 nanos:895987055}"
May 9 04:58:35.732485 systemd[1]: Started sshd@22-10.0.0.63:22-10.0.0.1:33198.service - OpenSSH per-connection server daemon (10.0.0.1:33198).
May 9 04:58:35.794634 sshd[5128]: Accepted publickey for core from 10.0.0.1 port 33198 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:35.795925 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:35.799920 systemd-logind[1470]: New session 23 of user core.
May 9 04:58:35.804402 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 04:58:35.992001 sshd[5130]: Connection closed by 10.0.0.1 port 33198
May 9 04:58:35.992297 sshd-session[5128]: pam_unix(sshd:session): session closed for user core
May 9 04:58:35.996171 systemd-logind[1470]: Session 23 logged out. Waiting for processes to exit.
May 9 04:58:35.996342 systemd[1]: sshd@22-10.0.0.63:22-10.0.0.1:33198.service: Deactivated successfully.
May 9 04:58:35.998023 systemd[1]: session-23.scope: Deactivated successfully.
May 9 04:58:35.998997 systemd-logind[1470]: Removed session 23.
May 9 04:58:41.006590 systemd[1]: Started sshd@23-10.0.0.63:22-10.0.0.1:33208.service - OpenSSH per-connection server daemon (10.0.0.1:33208).
May 9 04:58:41.066781 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 33208 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:58:41.068106 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:58:41.072100 systemd-logind[1470]: New session 24 of user core.
May 9 04:58:41.085352 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 04:58:41.235376 sshd[5147]: Connection closed by 10.0.0.1 port 33208
May 9 04:58:41.236103 sshd-session[5145]: pam_unix(sshd:session): session closed for user core
May 9 04:58:41.239804 systemd[1]: sshd@23-10.0.0.63:22-10.0.0.1:33208.service: Deactivated successfully.
May 9 04:58:41.241628 systemd[1]: session-24.scope: Deactivated successfully.
May 9 04:58:41.242430 systemd-logind[1470]: Session 24 logged out. Waiting for processes to exit.
May 9 04:58:41.243694 systemd-logind[1470]: Removed session 24.