Mar 20 18:08:05.892780 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 20 18:08:05.892802 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Mar 20 13:18:46 -00 2025
Mar 20 18:08:05.892812 kernel: KASLR enabled
Mar 20 18:08:05.892818 kernel: efi: EFI v2.7 by EDK II
Mar 20 18:08:05.892824 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 20 18:08:05.892829 kernel: random: crng init done
Mar 20 18:08:05.892836 kernel: secureboot: Secure boot disabled
Mar 20 18:08:05.892842 kernel: ACPI: Early table checksum verification disabled
Mar 20 18:08:05.892848 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 20 18:08:05.892855 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 20 18:08:05.892861 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892867 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892873 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892879 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892886 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892894 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892900 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892906 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892912 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:08:05.892918 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 20 18:08:05.892925 kernel: NUMA: Failed to initialise from firmware
Mar 20 18:08:05.892931 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 18:08:05.892937 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Mar 20 18:08:05.892943 kernel: Zone ranges:
Mar 20 18:08:05.892949 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 18:08:05.892957 kernel: DMA32 empty
Mar 20 18:08:05.892962 kernel: Normal empty
Mar 20 18:08:05.892968 kernel: Movable zone start for each node
Mar 20 18:08:05.892974 kernel: Early memory node ranges
Mar 20 18:08:05.892980 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 20 18:08:05.892986 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 20 18:08:05.892992 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 20 18:08:05.892998 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 20 18:08:05.893005 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 20 18:08:05.893011 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 20 18:08:05.893016 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 20 18:08:05.893023 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 20 18:08:05.893030 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 20 18:08:05.893037 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 18:08:05.893043 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 20 18:08:05.893053 kernel: psci: probing for conduit method from ACPI.
Mar 20 18:08:05.893059 kernel: psci: PSCIv1.1 detected in firmware.
Mar 20 18:08:05.893066 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 20 18:08:05.893074 kernel: psci: Trusted OS migration not required
Mar 20 18:08:05.893080 kernel: psci: SMC Calling Convention v1.1
Mar 20 18:08:05.893087 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 20 18:08:05.893093 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 20 18:08:05.893105 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 20 18:08:05.893114 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 20 18:08:05.893121 kernel: Detected PIPT I-cache on CPU0
Mar 20 18:08:05.893128 kernel: CPU features: detected: GIC system register CPU interface
Mar 20 18:08:05.893134 kernel: CPU features: detected: Hardware dirty bit management
Mar 20 18:08:05.893140 kernel: CPU features: detected: Spectre-v4
Mar 20 18:08:05.893149 kernel: CPU features: detected: Spectre-BHB
Mar 20 18:08:05.893156 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 20 18:08:05.893162 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 20 18:08:05.893169 kernel: CPU features: detected: ARM erratum 1418040
Mar 20 18:08:05.893175 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 20 18:08:05.893182 kernel: alternatives: applying boot alternatives
Mar 20 18:08:05.893189 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7e8d7de7ff8626488e956fa44b1348d7cdfde9b4a90f4fdae2fb2fe94dbb7bff
Mar 20 18:08:05.893196 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 20 18:08:05.893202 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 20 18:08:05.893209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 20 18:08:05.893215 kernel: Fallback order for Node 0: 0
Mar 20 18:08:05.893223 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 20 18:08:05.893230 kernel: Policy zone: DMA
Mar 20 18:08:05.893236 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 20 18:08:05.893243 kernel: software IO TLB: area num 4.
Mar 20 18:08:05.893249 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 20 18:08:05.893256 kernel: Memory: 2387408K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184880K reserved, 0K cma-reserved)
Mar 20 18:08:05.893263 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 20 18:08:05.893269 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 20 18:08:05.893338 kernel: rcu: RCU event tracing is enabled.
Mar 20 18:08:05.893348 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 20 18:08:05.893354 kernel: Trampoline variant of Tasks RCU enabled.
Mar 20 18:08:05.893361 kernel: Tracing variant of Tasks RCU enabled.
Mar 20 18:08:05.893371 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 20 18:08:05.893377 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 20 18:08:05.893384 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 20 18:08:05.893390 kernel: GICv3: 256 SPIs implemented
Mar 20 18:08:05.893397 kernel: GICv3: 0 Extended SPIs implemented
Mar 20 18:08:05.893403 kernel: Root IRQ handler: gic_handle_irq
Mar 20 18:08:05.893409 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 20 18:08:05.893417 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 20 18:08:05.893424 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 20 18:08:05.893431 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 20 18:08:05.893439 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 20 18:08:05.893448 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 20 18:08:05.893456 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 20 18:08:05.893462 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 20 18:08:05.893469 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:08:05.893475 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 20 18:08:05.893482 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 20 18:08:05.893488 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 20 18:08:05.893495 kernel: arm-pv: using stolen time PV
Mar 20 18:08:05.893502 kernel: Console: colour dummy device 80x25
Mar 20 18:08:05.893509 kernel: ACPI: Core revision 20230628
Mar 20 18:08:05.893516 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 20 18:08:05.893524 kernel: pid_max: default: 32768 minimum: 301
Mar 20 18:08:05.893531 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 20 18:08:05.893537 kernel: landlock: Up and running.
Mar 20 18:08:05.893544 kernel: SELinux: Initializing.
Mar 20 18:08:05.893550 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 18:08:05.893557 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 18:08:05.893564 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 18:08:05.893571 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 18:08:05.893578 kernel: rcu: Hierarchical SRCU implementation.
Mar 20 18:08:05.893586 kernel: rcu: Max phase no-delay instances is 400.
Mar 20 18:08:05.893593 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 20 18:08:05.893599 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 20 18:08:05.893606 kernel: Remapping and enabling EFI services.
Mar 20 18:08:05.893613 kernel: smp: Bringing up secondary CPUs ...
Mar 20 18:08:05.893619 kernel: Detected PIPT I-cache on CPU1
Mar 20 18:08:05.893626 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 20 18:08:05.893633 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 20 18:08:05.893639 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:08:05.893648 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 20 18:08:05.893655 kernel: Detected PIPT I-cache on CPU2
Mar 20 18:08:05.893667 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 20 18:08:05.893675 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 20 18:08:05.893682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:08:05.893689 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 20 18:08:05.893696 kernel: Detected PIPT I-cache on CPU3
Mar 20 18:08:05.893703 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 20 18:08:05.893711 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 20 18:08:05.893719 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:08:05.893726 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 20 18:08:05.893733 kernel: smp: Brought up 1 node, 4 CPUs
Mar 20 18:08:05.893740 kernel: SMP: Total of 4 processors activated.
Mar 20 18:08:05.893747 kernel: CPU features: detected: 32-bit EL0 Support
Mar 20 18:08:05.893754 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 20 18:08:05.893761 kernel: CPU features: detected: Common not Private translations
Mar 20 18:08:05.893768 kernel: CPU features: detected: CRC32 instructions
Mar 20 18:08:05.893776 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 20 18:08:05.893783 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 20 18:08:05.893790 kernel: CPU features: detected: LSE atomic instructions
Mar 20 18:08:05.893797 kernel: CPU features: detected: Privileged Access Never
Mar 20 18:08:05.893803 kernel: CPU features: detected: RAS Extension Support
Mar 20 18:08:05.893810 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 20 18:08:05.893817 kernel: CPU: All CPU(s) started at EL1
Mar 20 18:08:05.893824 kernel: alternatives: applying system-wide alternatives
Mar 20 18:08:05.893830 kernel: devtmpfs: initialized
Mar 20 18:08:05.893837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 20 18:08:05.893846 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 20 18:08:05.893853 kernel: pinctrl core: initialized pinctrl subsystem
Mar 20 18:08:05.893859 kernel: SMBIOS 3.0.0 present.
Mar 20 18:08:05.893866 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 20 18:08:05.893873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 20 18:08:05.893880 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 20 18:08:05.893887 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 20 18:08:05.893894 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 20 18:08:05.893901 kernel: audit: initializing netlink subsys (disabled)
Mar 20 18:08:05.893909 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 20 18:08:05.893916 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 20 18:08:05.893922 kernel: cpuidle: using governor menu
Mar 20 18:08:05.893929 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 20 18:08:05.893936 kernel: ASID allocator initialised with 32768 entries
Mar 20 18:08:05.893943 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 20 18:08:05.893950 kernel: Serial: AMBA PL011 UART driver
Mar 20 18:08:05.893957 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 20 18:08:05.893964 kernel: Modules: 0 pages in range for non-PLT usage
Mar 20 18:08:05.893972 kernel: Modules: 509248 pages in range for PLT usage
Mar 20 18:08:05.893979 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 20 18:08:05.893986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 20 18:08:05.893993 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 20 18:08:05.893999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 20 18:08:05.894006 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 20 18:08:05.894013 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 20 18:08:05.894020 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 20 18:08:05.894028 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 20 18:08:05.894035 kernel: ACPI: Added _OSI(Module Device)
Mar 20 18:08:05.894042 kernel: ACPI: Added _OSI(Processor Device)
Mar 20 18:08:05.894049 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 20 18:08:05.894056 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 20 18:08:05.894062 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 20 18:08:05.894069 kernel: ACPI: Interpreter enabled
Mar 20 18:08:05.894076 kernel: ACPI: Using GIC for interrupt routing
Mar 20 18:08:05.894083 kernel: ACPI: MCFG table detected, 1 entries
Mar 20 18:08:05.894090 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 20 18:08:05.894098 kernel: printk: console [ttyAMA0] enabled
Mar 20 18:08:05.894110 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 20 18:08:05.894254 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 20 18:08:05.894368 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 20 18:08:05.894442 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 20 18:08:05.894506 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 20 18:08:05.894569 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 20 18:08:05.894582 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 20 18:08:05.894589 kernel: PCI host bridge to bus 0000:00
Mar 20 18:08:05.894661 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 20 18:08:05.894723 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 20 18:08:05.894785 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 20 18:08:05.894843 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 20 18:08:05.894925 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 20 18:08:05.895022 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 20 18:08:05.895119 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 20 18:08:05.895192 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 20 18:08:05.895261 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 18:08:05.895344 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 18:08:05.895416 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 20 18:08:05.895483 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 20 18:08:05.895548 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 20 18:08:05.895606 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 20 18:08:05.895665 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 20 18:08:05.895674 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 20 18:08:05.895681 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 20 18:08:05.895689 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 20 18:08:05.895695 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 20 18:08:05.895705 kernel: iommu: Default domain type: Translated
Mar 20 18:08:05.895712 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 20 18:08:05.895719 kernel: efivars: Registered efivars operations
Mar 20 18:08:05.895726 kernel: vgaarb: loaded
Mar 20 18:08:05.895733 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 20 18:08:05.895744 kernel: VFS: Disk quotas dquot_6.6.0
Mar 20 18:08:05.895752 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 20 18:08:05.895761 kernel: pnp: PnP ACPI init
Mar 20 18:08:05.895843 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 20 18:08:05.895856 kernel: pnp: PnP ACPI: found 1 devices
Mar 20 18:08:05.895863 kernel: NET: Registered PF_INET protocol family
Mar 20 18:08:05.895870 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 20 18:08:05.895877 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 20 18:08:05.895884 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 20 18:08:05.895891 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 20 18:08:05.895898 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 20 18:08:05.895906 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 20 18:08:05.895915 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 18:08:05.895922 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 18:08:05.895930 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 20 18:08:05.895937 kernel: PCI: CLS 0 bytes, default 64
Mar 20 18:08:05.895944 kernel: kvm [1]: HYP mode not available
Mar 20 18:08:05.895951 kernel: Initialise system trusted keyrings
Mar 20 18:08:05.895958 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 20 18:08:05.895965 kernel: Key type asymmetric registered
Mar 20 18:08:05.895972 kernel: Asymmetric key parser 'x509' registered
Mar 20 18:08:05.895979 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 20 18:08:05.895987 kernel: io scheduler mq-deadline registered
Mar 20 18:08:05.895994 kernel: io scheduler kyber registered
Mar 20 18:08:05.896001 kernel: io scheduler bfq registered
Mar 20 18:08:05.896009 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 20 18:08:05.896016 kernel: ACPI: button: Power Button [PWRB]
Mar 20 18:08:05.896023 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 20 18:08:05.896096 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 20 18:08:05.896112 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 20 18:08:05.896120 kernel: thunder_xcv, ver 1.0
Mar 20 18:08:05.896130 kernel: thunder_bgx, ver 1.0
Mar 20 18:08:05.896137 kernel: nicpf, ver 1.0
Mar 20 18:08:05.896144 kernel: nicvf, ver 1.0
Mar 20 18:08:05.896228 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 20 18:08:05.896308 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-20T18:08:05 UTC (1742494085)
Mar 20 18:08:05.896322 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 20 18:08:05.896329 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 20 18:08:05.896336 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 20 18:08:05.896347 kernel: watchdog: Hard watchdog permanently disabled
Mar 20 18:08:05.896355 kernel: NET: Registered PF_INET6 protocol family
Mar 20 18:08:05.896362 kernel: Segment Routing with IPv6
Mar 20 18:08:05.896369 kernel: In-situ OAM (IOAM) with IPv6
Mar 20 18:08:05.896376 kernel: NET: Registered PF_PACKET protocol family
Mar 20 18:08:05.896383 kernel: Key type dns_resolver registered
Mar 20 18:08:05.896390 kernel: registered taskstats version 1
Mar 20 18:08:05.896398 kernel: Loading compiled-in X.509 certificates
Mar 20 18:08:05.896407 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 60ca5105dc3f344265f11c7b4aeda632cce92b3c'
Mar 20 18:08:05.896421 kernel: Key type .fscrypt registered
Mar 20 18:08:05.896428 kernel: Key type fscrypt-provisioning registered
Mar 20 18:08:05.896435 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 20 18:08:05.896442 kernel: ima: Allocated hash algorithm: sha1
Mar 20 18:08:05.896449 kernel: ima: No architecture policies found
Mar 20 18:08:05.896456 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 20 18:08:05.896463 kernel: clk: Disabling unused clocks
Mar 20 18:08:05.896470 kernel: Freeing unused kernel memory: 38464K
Mar 20 18:08:05.896478 kernel: Run /init as init process
Mar 20 18:08:05.896486 kernel: with arguments:
Mar 20 18:08:05.896493 kernel: /init
Mar 20 18:08:05.896500 kernel: with environment:
Mar 20 18:08:05.896507 kernel: HOME=/
Mar 20 18:08:05.896514 kernel: TERM=linux
Mar 20 18:08:05.896520 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 20 18:08:05.896528 systemd[1]: Successfully made /usr/ read-only.
Mar 20 18:08:05.896539 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 18:08:05.896549 systemd[1]: Detected virtualization kvm.
Mar 20 18:08:05.896556 systemd[1]: Detected architecture arm64.
Mar 20 18:08:05.896563 systemd[1]: Running in initrd.
Mar 20 18:08:05.896571 systemd[1]: No hostname configured, using default hostname.
Mar 20 18:08:05.896579 systemd[1]: Hostname set to .
Mar 20 18:08:05.896586 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 18:08:05.896594 systemd[1]: Queued start job for default target initrd.target.
Mar 20 18:08:05.896603 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 18:08:05.896611 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 18:08:05.896619 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 20 18:08:05.896627 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 18:08:05.896635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 20 18:08:05.896643 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 20 18:08:05.896652 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 20 18:08:05.896661 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 20 18:08:05.896669 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 18:08:05.896677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 18:08:05.896685 systemd[1]: Reached target paths.target - Path Units.
Mar 20 18:08:05.896693 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 18:08:05.896700 systemd[1]: Reached target swap.target - Swaps.
Mar 20 18:08:05.896708 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 18:08:05.896716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 18:08:05.896724 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 18:08:05.896734 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 20 18:08:05.896741 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 20 18:08:05.896749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 18:08:05.896757 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 18:08:05.896764 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 18:08:05.896772 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 18:08:05.896780 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 20 18:08:05.896788 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 18:08:05.896797 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 20 18:08:05.896805 systemd[1]: Starting systemd-fsck-usr.service...
Mar 20 18:08:05.896813 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 18:08:05.896821 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 18:08:05.896829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 18:08:05.896837 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 20 18:08:05.896844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 18:08:05.896854 systemd[1]: Finished systemd-fsck-usr.service.
Mar 20 18:08:05.896862 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 18:08:05.896887 systemd-journald[237]: Collecting audit messages is disabled.
Mar 20 18:08:05.896909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 18:08:05.896917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 18:08:05.896926 systemd-journald[237]: Journal started
Mar 20 18:08:05.896944 systemd-journald[237]: Runtime Journal (/run/log/journal/fecbbcd5b3d64f4ea38e9b4823e5eda8) is 5.9M, max 47.3M, 41.4M free.
Mar 20 18:08:05.886650 systemd-modules-load[238]: Inserted module 'overlay'
Mar 20 18:08:05.899590 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 18:08:05.902291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 20 18:08:05.903637 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 18:08:05.906501 kernel: Bridge firewalling registered
Mar 20 18:08:05.903867 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 20 18:08:05.905424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 18:08:05.912199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 18:08:05.913958 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 18:08:05.916436 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 18:08:05.920907 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 18:08:05.922987 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 20 18:08:05.928916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 18:08:05.933026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 18:08:05.935171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 18:08:05.938182 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 18:08:05.942420 dracut-cmdline[273]: dracut-dracut-053
Mar 20 18:08:05.944666 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7e8d7de7ff8626488e956fa44b1348d7cdfde9b4a90f4fdae2fb2fe94dbb7bff
Mar 20 18:08:05.981697 systemd-resolved[283]: Positive Trust Anchors:
Mar 20 18:08:05.984381 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 18:08:05.985352 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 18:08:05.994049 systemd-resolved[283]: Defaulting to hostname 'linux'.
Mar 20 18:08:05.995309 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 18:08:05.996191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 18:08:06.022316 kernel: SCSI subsystem initialized
Mar 20 18:08:06.026296 kernel: Loading iSCSI transport class v2.0-870.
Mar 20 18:08:06.034311 kernel: iscsi: registered transport (tcp)
Mar 20 18:08:06.047291 kernel: iscsi: registered transport (qla4xxx)
Mar 20 18:08:06.047306 kernel: QLogic iSCSI HBA Driver
Mar 20 18:08:06.093509 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 20 18:08:06.095757 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 20 18:08:06.131809 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 20 18:08:06.131865 kernel: device-mapper: uevent: version 1.0.3
Mar 20 18:08:06.133308 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 20 18:08:06.179304 kernel: raid6: neonx8 gen() 15628 MB/s
Mar 20 18:08:06.196292 kernel: raid6: neonx4 gen() 15745 MB/s
Mar 20 18:08:06.213291 kernel: raid6: neonx2 gen() 13120 MB/s
Mar 20 18:08:06.230289 kernel: raid6: neonx1 gen() 10406 MB/s
Mar 20 18:08:06.247293 kernel: raid6: int64x8 gen() 6783 MB/s
Mar 20 18:08:06.264285 kernel: raid6: int64x4 gen() 7343 MB/s
Mar 20 18:08:06.281292 kernel: raid6: int64x2 gen() 6108 MB/s
Mar 20 18:08:06.298288 kernel: raid6: int64x1 gen() 5055 MB/s
Mar 20 18:08:06.298302 kernel: raid6: using algorithm neonx4 gen() 15745 MB/s
Mar 20 18:08:06.315298 kernel: raid6: .... xor() 12412 MB/s, rmw enabled
Mar 20 18:08:06.315318 kernel: raid6: using neon recovery algorithm
Mar 20 18:08:06.322393 kernel: xor: measuring software checksum speed
Mar 20 18:08:06.322407 kernel: 8regs : 21590 MB/sec
Mar 20 18:08:06.323442 kernel: 32regs : 21676 MB/sec
Mar 20 18:08:06.323459 kernel: arm64_neon : 28003 MB/sec
Mar 20 18:08:06.323470 kernel: xor: using function: arm64_neon (28003 MB/sec)
Mar 20 18:08:06.378295 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 20 18:08:06.388826 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 18:08:06.391349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 18:08:06.421166 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Mar 20 18:08:06.425051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 18:08:06.427662 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 20 18:08:06.453985 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Mar 20 18:08:06.480735 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 18:08:06.483001 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 18:08:06.532580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 18:08:06.536401 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 20 18:08:06.555309 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 20 18:08:06.557144 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 18:08:06.558389 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 18:08:06.560502 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 18:08:06.562433 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 20 18:08:06.581186 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 18:08:06.584206 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 20 18:08:06.588892 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 20 18:08:06.588987 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 20 18:08:06.589004 kernel: GPT:9289727 != 19775487
Mar 20 18:08:06.589013 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 20 18:08:06.589023 kernel: GPT:9289727 != 19775487
Mar 20 18:08:06.589032 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 20 18:08:06.589040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 18:08:06.604550 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (509)
Mar 20 18:08:06.604596 kernel: BTRFS: device fsid 7c452270-b08f-4ab0-84d1-fe3217dab188 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (513)
Mar 20 18:08:06.619952 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 20 18:08:06.627539 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 20 18:08:06.634922 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 18:08:06.641695 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 20 18:08:06.642612 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 20 18:08:06.646130 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 20 18:08:06.647039 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 18:08:06.647095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 18:08:06.649313 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 18:08:06.650851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 18:08:06.650899 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 18:08:06.653341 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 18:08:06.665825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 18:08:06.670643 disk-uuid[548]: Primary Header is updated.
Mar 20 18:08:06.670643 disk-uuid[548]: Secondary Entries is updated.
Mar 20 18:08:06.670643 disk-uuid[548]: Secondary Header is updated.
Mar 20 18:08:06.674321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 18:08:06.678821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 18:08:06.682513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 18:08:06.713420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 18:08:07.691314 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 18:08:07.691597 disk-uuid[550]: The operation has completed successfully. Mar 20 18:08:07.716867 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 18:08:07.716975 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 18:08:07.741468 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 18:08:07.756082 sh[575]: Success Mar 20 18:08:07.768295 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 20 18:08:07.793735 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 18:08:07.796063 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 18:08:07.811495 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 20 18:08:07.818062 kernel: BTRFS info (device dm-0): first mount of filesystem 7c452270-b08f-4ab0-84d1-fe3217dab188 Mar 20 18:08:07.818116 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:08:07.818138 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 18:08:07.818166 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 18:08:07.818635 kernel: BTRFS info (device dm-0): using free space tree Mar 20 18:08:07.822592 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 18:08:07.823624 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 20 18:08:07.824289 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 18:08:07.826325 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 20 18:08:07.845778 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:08:07.845812 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:08:07.845823 kernel: BTRFS info (device vda6): using free space tree Mar 20 18:08:07.848300 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 18:08:07.852306 kernel: BTRFS info (device vda6): last unmount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:08:07.854655 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 18:08:07.856386 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 18:08:07.929598 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 18:08:07.932270 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 18:08:07.956234 ignition[663]: Ignition 2.20.0 Mar 20 18:08:07.956245 ignition[663]: Stage: fetch-offline Mar 20 18:08:07.956289 ignition[663]: no configs at "/usr/lib/ignition/base.d" Mar 20 18:08:07.956299 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:08:07.956455 ignition[663]: parsed url from cmdline: "" Mar 20 18:08:07.956458 ignition[663]: no config URL provided Mar 20 18:08:07.956462 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 18:08:07.956469 ignition[663]: no config at "/usr/lib/ignition/user.ign" Mar 20 18:08:07.956500 ignition[663]: op(1): [started] loading QEMU firmware config module Mar 20 18:08:07.956504 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 18:08:07.961869 ignition[663]: op(1): [finished] loading QEMU firmware config module Mar 20 18:08:07.970712 systemd-networkd[765]: lo: Link UP Mar 20 18:08:07.970725 systemd-networkd[765]: lo: Gained carrier Mar 20 18:08:07.971550 systemd-networkd[765]: Enumeration completed
Mar 20 18:08:07.971661 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 18:08:07.971923 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 18:08:07.971926 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 18:08:07.972931 systemd-networkd[765]: eth0: Link UP Mar 20 18:08:07.972934 systemd-networkd[765]: eth0: Gained carrier Mar 20 18:08:07.972940 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 18:08:07.974211 systemd[1]: Reached target network.target - Network. Mar 20 18:08:08.001341 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 18:08:08.012173 ignition[663]: parsing config with SHA512: 25c86bfb9d28786ffcafca5782674cc46ca15d448548cfa178caf549f8127c230bff57936cd1fe43417a7d2f1e66e4ab9a042e182e095bb4b97dab38f0420463 Mar 20 18:08:08.016738 unknown[663]: fetched base config from "system" Mar 20 18:08:08.016749 unknown[663]: fetched user config from "qemu" Mar 20 18:08:08.017461 ignition[663]: fetch-offline: fetch-offline passed Mar 20 18:08:08.018703 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 18:08:08.017619 ignition[663]: Ignition finished successfully Mar 20 18:08:08.020720 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 18:08:08.021611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 18:08:08.049999 ignition[774]: Ignition 2.20.0 Mar 20 18:08:08.050008 ignition[774]: Stage: kargs Mar 20 18:08:08.050164 ignition[774]: no configs at "/usr/lib/ignition/base.d" Mar 20 18:08:08.052618 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 20 18:08:08.050174 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:08:08.050994 ignition[774]: kargs: kargs passed Mar 20 18:08:08.055270 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 20 18:08:08.051042 ignition[774]: Ignition finished successfully Mar 20 18:08:08.079610 ignition[782]: Ignition 2.20.0 Mar 20 18:08:08.079618 ignition[782]: Stage: disks Mar 20 18:08:08.079768 ignition[782]: no configs at "/usr/lib/ignition/base.d" Mar 20 18:08:08.082127 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 18:08:08.079778 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:08:08.083663 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 18:08:08.080597 ignition[782]: disks: disks passed Mar 20 18:08:08.085314 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 18:08:08.080641 ignition[782]: Ignition finished successfully Mar 20 18:08:08.087320 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 18:08:08.089119 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 18:08:08.090575 systemd[1]: Reached target basic.target - Basic System. Mar 20 18:08:08.093223 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 18:08:08.112558 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 18:08:08.116575 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 18:08:08.120065 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 18:08:08.180223 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 18:08:08.181765 kernel: EXT4-fs (vda9): mounted filesystem b7437caf-1938-4bc6-8e3f-9394bb7ad561 r/w with ordered data mode. Quota mode: none. Mar 20 18:08:08.181567 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Mar 20 18:08:08.183871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 18:08:08.185406 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 18:08:08.186386 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 18:08:08.186429 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 18:08:08.186452 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 18:08:08.200652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 18:08:08.202980 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 20 18:08:08.207716 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (802) Mar 20 18:08:08.207740 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:08:08.207751 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:08:08.207761 kernel: BTRFS info (device vda6): using free space tree Mar 20 18:08:08.210294 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 18:08:08.210671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 18:08:08.248848 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 18:08:08.252929 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Mar 20 18:08:08.257302 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 18:08:08.259972 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 18:08:08.322238 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 18:08:08.324273 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Mar 20 18:08:08.326404 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 18:08:08.340295 kernel: BTRFS info (device vda6): last unmount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:08:08.351467 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 20 18:08:08.360464 ignition[916]: INFO : Ignition 2.20.0 Mar 20 18:08:08.360464 ignition[916]: INFO : Stage: mount Mar 20 18:08:08.362072 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 18:08:08.362072 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:08:08.362072 ignition[916]: INFO : mount: mount passed Mar 20 18:08:08.362072 ignition[916]: INFO : Ignition finished successfully Mar 20 18:08:08.363544 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 18:08:08.365831 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 18:08:08.948246 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 18:08:08.949754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 18:08:08.970292 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (928) Mar 20 18:08:08.972497 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:08:08.972522 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:08:08.972540 kernel: BTRFS info (device vda6): using free space tree Mar 20 18:08:08.974291 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 18:08:08.975420 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 18:08:08.995510 ignition[945]: INFO : Ignition 2.20.0 Mar 20 18:08:08.995510 ignition[945]: INFO : Stage: files Mar 20 18:08:08.997127 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 18:08:08.997127 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:08:08.997127 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Mar 20 18:08:09.000807 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 20 18:08:09.000807 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 20 18:08:09.000807 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 20 18:08:09.000807 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 20 18:08:09.000807 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 20 18:08:09.000012 unknown[945]: wrote ssh authorized keys file for user: core Mar 20 18:08:09.008602 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 20 18:08:09.008602 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 20 18:08:09.053368 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 20 18:08:09.500200 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 20 18:08:09.502058 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 18:08:09.503935 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 20 18:08:09.856925 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 20 18:08:09.892878 systemd-networkd[765]: eth0: Gained IPv6LL Mar 20 18:08:10.070105 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 20 18:08:10.070105 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 20 18:08:10.072901 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 20 18:08:10.084994 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 18:08:10.088034 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 18:08:10.090173 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 20 18:08:10.090173 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 20 18:08:10.090173 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 20 18:08:10.090173 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 20 18:08:10.090173 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 20 18:08:10.090173 ignition[945]: INFO : files: files passed Mar 20 18:08:10.090173 ignition[945]: INFO : Ignition finished successfully Mar 20 18:08:10.091531 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 20 18:08:10.095390 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 20 18:08:10.111414 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 20 18:08:10.114578 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 20 18:08:10.115324 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 20 18:08:10.118157 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Mar 20 18:08:10.120352 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 18:08:10.120352 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 20 18:08:10.123443 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 18:08:10.122965 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 18:08:10.124619 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 20 18:08:10.127038 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 20 18:08:10.165663 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 20 18:08:10.165757 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 20 18:08:10.167544 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 20 18:08:10.169102 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 20 18:08:10.170640 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 20 18:08:10.171314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 20 18:08:10.185503 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 18:08:10.187408 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 18:08:10.216338 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 18:08:10.217232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 18:08:10.219179 systemd[1]: Stopped target timers.target - Timer Units. Mar 20 18:08:10.220754 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 18:08:10.220866 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 18:08:10.223061 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 18:08:10.224866 systemd[1]: Stopped target basic.target - Basic System. Mar 20 18:08:10.226243 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 18:08:10.227694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 18:08:10.229306 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 20 18:08:10.231103 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 20 18:08:10.232689 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 18:08:10.234306 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 18:08:10.236021 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 18:08:10.237610 systemd[1]: Stopped target swap.target - Swaps. Mar 20 18:08:10.238914 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 18:08:10.239019 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 18:08:10.241023 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 18:08:10.241890 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 18:08:10.243488 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 18:08:10.244360 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 18:08:10.246196 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 18:08:10.246315 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 18:08:10.248977 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 18:08:10.249100 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 18:08:10.250737 systemd[1]: Stopped target paths.target - Path Units. Mar 20 18:08:10.252076 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 18:08:10.252178 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 18:08:10.253808 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 18:08:10.255472 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 18:08:10.256790 systemd[1]: iscsid.socket: Deactivated successfully. 
Mar 20 18:08:10.256870 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 18:08:10.258325 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 18:08:10.258403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 18:08:10.260246 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 18:08:10.260362 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 18:08:10.261900 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 18:08:10.262005 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 18:08:10.263966 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 18:08:10.265295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 18:08:10.266002 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 18:08:10.266130 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 18:08:10.267602 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 18:08:10.267704 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 18:08:10.273487 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 20 18:08:10.273579 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 18:08:10.281011 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 20 18:08:10.283917 ignition[1001]: INFO : Ignition 2.20.0 Mar 20 18:08:10.283917 ignition[1001]: INFO : Stage: umount Mar 20 18:08:10.285537 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 18:08:10.285537 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:08:10.285537 ignition[1001]: INFO : umount: umount passed Mar 20 18:08:10.285537 ignition[1001]: INFO : Ignition finished successfully Mar 20 18:08:10.286551 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 18:08:10.288309 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 18:08:10.289226 systemd[1]: Stopped target network.target - Network. Mar 20 18:08:10.290520 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 18:08:10.290574 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 18:08:10.292079 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 18:08:10.292129 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 18:08:10.293622 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 18:08:10.293661 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 18:08:10.295054 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 18:08:10.295100 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 18:08:10.296758 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 18:08:10.298251 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 18:08:10.307022 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 18:08:10.307144 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 18:08:10.310583 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 18:08:10.310757 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Mar 20 18:08:10.310841 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 18:08:10.313320 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 18:08:10.313853 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 18:08:10.313906 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 18:08:10.315610 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 18:08:10.316885 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 18:08:10.316933 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 18:08:10.318417 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 18:08:10.318456 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 18:08:10.320424 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 18:08:10.320470 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 18:08:10.321911 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 18:08:10.321954 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 18:08:10.324166 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 18:08:10.327023 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 18:08:10.327074 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 18:08:10.338507 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 20 18:08:10.338612 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 20 18:08:10.340159 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 20 18:08:10.340238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Mar 20 18:08:10.341473 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 20 18:08:10.341583 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 18:08:10.343807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 20 18:08:10.343855 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 20 18:08:10.344726 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 20 18:08:10.344758 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 18:08:10.346116 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 20 18:08:10.346159 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 20 18:08:10.348266 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 20 18:08:10.348317 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 20 18:08:10.350467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 18:08:10.350505 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 18:08:10.352586 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 20 18:08:10.352627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 20 18:08:10.354625 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 20 18:08:10.355935 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 20 18:08:10.355984 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 18:08:10.358223 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 20 18:08:10.358263 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 18:08:10.359844 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Mar 20 18:08:10.359884 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 18:08:10.361262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 18:08:10.361324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 18:08:10.364170 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 20 18:08:10.364224 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 20 18:08:10.368999 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 20 18:08:10.369119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 20 18:08:10.370181 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 20 18:08:10.372372 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 20 18:08:10.380570 systemd[1]: Switching root. Mar 20 18:08:10.410023 systemd-journald[237]: Journal stopped Mar 20 18:08:11.123806 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Mar 20 18:08:11.123863 kernel: SELinux: policy capability network_peer_controls=1 Mar 20 18:08:11.123876 kernel: SELinux: policy capability open_perms=1 Mar 20 18:08:11.123890 kernel: SELinux: policy capability extended_socket_class=1 Mar 20 18:08:11.123900 kernel: SELinux: policy capability always_check_network=0 Mar 20 18:08:11.123909 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 20 18:08:11.123927 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 20 18:08:11.123938 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 20 18:08:11.123948 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 20 18:08:11.123957 kernel: audit: type=1403 audit(1742494090.547:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 20 18:08:11.123968 systemd[1]: Successfully loaded SELinux policy in 31.192ms. 
Mar 20 18:08:11.123985 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.512ms. Mar 20 18:08:11.123998 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 18:08:11.124018 systemd[1]: Detected virtualization kvm. Mar 20 18:08:11.124029 systemd[1]: Detected architecture arm64. Mar 20 18:08:11.124040 systemd[1]: Detected first boot. Mar 20 18:08:11.124050 systemd[1]: Initializing machine ID from VM UUID. Mar 20 18:08:11.124069 zram_generator::config[1048]: No configuration found. Mar 20 18:08:11.124081 kernel: NET: Registered PF_VSOCK protocol family Mar 20 18:08:11.124103 systemd[1]: Populated /etc with preset unit settings. Mar 20 18:08:11.124116 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 20 18:08:11.124126 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 20 18:08:11.124137 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 20 18:08:11.124147 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 20 18:08:11.124158 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 20 18:08:11.124169 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 20 18:08:11.124179 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 20 18:08:11.124189 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 20 18:08:11.124199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 20 18:08:11.124211 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Mar 20 18:08:11.124222 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 20 18:08:11.124232 systemd[1]: Created slice user.slice - User and Session Slice. Mar 20 18:08:11.124243 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 18:08:11.124253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 18:08:11.124264 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 20 18:08:11.124274 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 20 18:08:11.124302 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 20 18:08:11.124315 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 18:08:11.124325 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 20 18:08:11.124335 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 18:08:11.124346 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 20 18:08:11.124356 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 20 18:08:11.124366 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 20 18:08:11.124381 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 20 18:08:11.124400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 18:08:11.124417 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 18:08:11.124431 systemd[1]: Reached target slices.target - Slice Units. Mar 20 18:08:11.124444 systemd[1]: Reached target swap.target - Swaps. Mar 20 18:08:11.124459 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Mar 20 18:08:11.124469 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 20 18:08:11.124483 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 20 18:08:11.124494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 18:08:11.124504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 18:08:11.124514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 18:08:11.124526 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 20 18:08:11.124537 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 20 18:08:11.124547 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 20 18:08:11.124557 systemd[1]: Mounting media.mount - External Media Directory... Mar 20 18:08:11.124567 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 20 18:08:11.124577 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 20 18:08:11.124588 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 20 18:08:11.124599 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 20 18:08:11.124609 systemd[1]: Reached target machines.target - Containers. Mar 20 18:08:11.124621 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 20 18:08:11.124632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 18:08:11.124642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 18:08:11.124654 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Mar 20 18:08:11.124664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 18:08:11.124674 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 18:08:11.124684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 18:08:11.124694 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 20 18:08:11.124706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 18:08:11.124717 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 20 18:08:11.124727 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 20 18:08:11.124738 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 20 18:08:11.124748 kernel: fuse: init (API version 7.39) Mar 20 18:08:11.124757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 20 18:08:11.124767 systemd[1]: Stopped systemd-fsck-usr.service. Mar 20 18:08:11.124778 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 18:08:11.124790 kernel: loop: module loaded Mar 20 18:08:11.124800 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 18:08:11.124811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 18:08:11.124821 kernel: ACPI: bus type drm_connector registered Mar 20 18:08:11.124831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 20 18:08:11.124841 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Mar 20 18:08:11.124851 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 20 18:08:11.124861 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 18:08:11.124873 systemd[1]: verity-setup.service: Deactivated successfully. Mar 20 18:08:11.124883 systemd[1]: Stopped verity-setup.service. Mar 20 18:08:11.124895 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 20 18:08:11.124905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 20 18:08:11.124916 systemd[1]: Mounted media.mount - External Media Directory. Mar 20 18:08:11.124928 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 20 18:08:11.124944 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 20 18:08:11.124975 systemd-journald[1116]: Collecting audit messages is disabled. Mar 20 18:08:11.124997 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 20 18:08:11.125008 systemd-journald[1116]: Journal started Mar 20 18:08:11.125028 systemd-journald[1116]: Runtime Journal (/run/log/journal/fecbbcd5b3d64f4ea38e9b4823e5eda8) is 5.9M, max 47.3M, 41.4M free. Mar 20 18:08:10.929879 systemd[1]: Queued start job for default target multi-user.target. Mar 20 18:08:10.941183 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 20 18:08:10.941566 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 20 18:08:11.127359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 20 18:08:11.128742 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 18:08:11.129521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 18:08:11.130732 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 20 18:08:11.130903 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Mar 20 18:08:11.132052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 18:08:11.132231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 18:08:11.133389 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 18:08:11.133545 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 18:08:11.134590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 18:08:11.134761 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 18:08:11.135936 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 20 18:08:11.136103 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 20 18:08:11.137259 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 18:08:11.137433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 18:08:11.138723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 18:08:11.139910 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 20 18:08:11.142331 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 20 18:08:11.143547 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 20 18:08:11.156143 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 20 18:08:11.158533 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 18:08:11.160424 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 18:08:11.161295 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 18:08:11.161331 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 20 18:08:11.162989 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 18:08:11.168436 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 18:08:11.170327 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 18:08:11.171348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 18:08:11.172541 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 18:08:11.174245 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 20 18:08:11.175334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 18:08:11.176164 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 20 18:08:11.177139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 18:08:11.177998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 18:08:11.179915 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 18:08:11.181345 systemd-journald[1116]: Time spent on flushing to /var/log/journal/fecbbcd5b3d64f4ea38e9b4823e5eda8 is 20.302ms for 869 entries. Mar 20 18:08:11.181345 systemd-journald[1116]: System Journal (/var/log/journal/fecbbcd5b3d64f4ea38e9b4823e5eda8) is 8M, max 195.6M, 187.6M free. Mar 20 18:08:11.216270 systemd-journald[1116]: Received client request to flush runtime journal. 
Mar 20 18:08:11.216365 kernel: loop0: detected capacity change from 0 to 126448 Mar 20 18:08:11.216386 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 18:08:11.181697 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 18:08:11.184892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 18:08:11.186080 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 18:08:11.188525 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 18:08:11.190156 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 18:08:11.196520 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 18:08:11.198668 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 18:08:11.203577 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 18:08:11.206414 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 18:08:11.221782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 18:08:11.226254 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 20 18:08:11.228170 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Mar 20 18:08:11.228185 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Mar 20 18:08:11.231460 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 18:08:11.234412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 18:08:11.238521 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Mar 20 18:08:11.250512 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 18:08:11.261300 kernel: loop1: detected capacity change from 0 to 103832 Mar 20 18:08:11.272836 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 20 18:08:11.275517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 18:08:11.291315 kernel: loop2: detected capacity change from 0 to 189592 Mar 20 18:08:11.296552 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Mar 20 18:08:11.296839 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Mar 20 18:08:11.301159 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 18:08:11.315317 kernel: loop3: detected capacity change from 0 to 126448 Mar 20 18:08:11.320298 kernel: loop4: detected capacity change from 0 to 103832 Mar 20 18:08:11.325298 kernel: loop5: detected capacity change from 0 to 189592 Mar 20 18:08:11.329115 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 20 18:08:11.329502 (sd-merge)[1192]: Merged extensions into '/usr'. Mar 20 18:08:11.334627 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Mar 20 18:08:11.334748 systemd[1]: Reloading... Mar 20 18:08:11.400322 zram_generator::config[1223]: No configuration found. Mar 20 18:08:11.454862 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 18:08:11.494110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 18:08:11.545320 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 18:08:11.545800 systemd[1]: Reloading finished in 210 ms. 
Mar 20 18:08:11.566023 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 18:08:11.567506 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 20 18:08:11.582503 systemd[1]: Starting ensure-sysext.service... Mar 20 18:08:11.584246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 18:08:11.597116 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Mar 20 18:08:11.597131 systemd[1]: Reloading... Mar 20 18:08:11.607421 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 18:08:11.607625 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 18:08:11.608252 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 18:08:11.608471 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Mar 20 18:08:11.608527 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Mar 20 18:08:11.611164 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 18:08:11.611171 systemd-tmpfiles[1255]: Skipping /boot Mar 20 18:08:11.620572 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 18:08:11.620591 systemd-tmpfiles[1255]: Skipping /boot Mar 20 18:08:11.647450 zram_generator::config[1282]: No configuration found. Mar 20 18:08:11.728605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 18:08:11.778443 systemd[1]: Reloading finished in 181 ms. Mar 20 18:08:11.790903 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Mar 20 18:08:11.806671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 18:08:11.814370 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 18:08:11.816690 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 20 18:08:11.818923 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 20 18:08:11.821786 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 18:08:11.829435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 18:08:11.831582 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 20 18:08:11.837018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 18:08:11.839996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 18:08:11.842523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 18:08:11.845465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 18:08:11.846602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 18:08:11.846714 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 18:08:11.848485 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 18:08:11.850506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 18:08:11.852696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 18:08:11.854335 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 20 18:08:11.854479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 18:08:11.859897 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 18:08:11.860057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 18:08:11.864189 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 18:08:11.867489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 18:08:11.869647 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 18:08:11.873390 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Mar 20 18:08:11.873598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 18:08:11.879397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 18:08:11.883899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 18:08:11.884068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 18:08:11.888510 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 20 18:08:11.892320 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 18:08:11.893898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 18:08:11.894058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 18:08:11.895522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 18:08:11.895694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 20 18:08:11.897241 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 18:08:11.897420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 18:08:11.898945 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 20 18:08:11.903675 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 18:08:11.907468 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 18:08:11.909774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 18:08:11.915485 augenrules[1375]: No rules Mar 20 18:08:11.917783 systemd[1]: Finished ensure-sysext.service. Mar 20 18:08:11.918903 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 18:08:11.919111 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 18:08:11.929225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 18:08:11.931122 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 18:08:11.933559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 18:08:11.938524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 18:08:11.946299 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1368) Mar 20 18:08:11.949062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 18:08:11.950032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 18:08:11.950125 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 20 18:08:11.952732 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 18:08:11.958786 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 18:08:11.959634 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 18:08:11.963207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 18:08:11.963410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 18:08:11.964823 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 18:08:11.965001 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 18:08:11.966443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 18:08:11.966579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 18:08:11.969247 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 18:08:11.969423 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 18:08:11.982926 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 20 18:08:11.990918 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 18:08:11.998922 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 20 18:08:12.000115 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 18:08:12.000168 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 18:08:12.025473 systemd-resolved[1324]: Positive Trust Anchors:
Mar 20 18:08:12.025489 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 18:08:12.025519 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 18:08:12.027872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 18:08:12.033551 systemd-resolved[1324]: Defaulting to hostname 'linux'. Mar 20 18:08:12.034977 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 18:08:12.036221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 18:08:12.041648 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 18:08:12.043164 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 18:08:12.071552 systemd-networkd[1399]: lo: Link UP Mar 20 18:08:12.071558 systemd-networkd[1399]: lo: Gained carrier Mar 20 18:08:12.072357 systemd-networkd[1399]: Enumeration completed Mar 20 18:08:12.072450 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 18:08:12.073709 systemd[1]: Reached target network.target - Network. Mar 20 18:08:12.074859 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 18:08:12.074932 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 20 18:08:12.075536 systemd-networkd[1399]: eth0: Link UP
Mar 20 18:08:12.075614 systemd-networkd[1399]: eth0: Gained carrier
Mar 20 18:08:12.075665 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 18:08:12.076369 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 20 18:08:12.080808 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 20 18:08:12.095679 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 20 18:08:12.096202 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection.
Mar 20 18:08:12.096599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 18:08:12.097470 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 20 18:08:12.097516 systemd-timesyncd[1401]: Initial clock synchronization to Thu 2025-03-20 18:08:12.267814 UTC.
Mar 20 18:08:12.108641 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 20 18:08:12.112001 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 20 18:08:12.113484 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 20 18:08:12.142022 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 20 18:08:12.142756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 18:08:12.174740 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 20 18:08:12.176249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 18:08:12.177369 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 20 18:08:12.178517 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 20 18:08:12.179712 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 20 18:08:12.181068 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 20 18:08:12.182270 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 20 18:08:12.183510 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 20 18:08:12.184706 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 20 18:08:12.184742 systemd[1]: Reached target paths.target - Path Units.
Mar 20 18:08:12.185643 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 18:08:12.187501 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 20 18:08:12.189936 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 20 18:08:12.193162 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 20 18:08:12.194593 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 20 18:08:12.195832 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 20 18:08:12.198977 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 20 18:08:12.200456 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 20 18:08:12.202699 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 20 18:08:12.204354 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 20 18:08:12.205506 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 18:08:12.206451 systemd[1]: Reached target basic.target - Basic System.
Mar 20 18:08:12.207421 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 20 18:08:12.207453 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 20 18:08:12.208368 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 20 18:08:12.209948 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 20 18:08:12.210238 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 20 18:08:12.216387 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 20 18:08:12.218018 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 20 18:08:12.218913 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 20 18:08:12.219756 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 20 18:08:12.221731 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 20 18:08:12.227061 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 20 18:08:12.229355 jq[1435]: false
Mar 20 18:08:12.229834 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 20 18:08:12.232674 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 20 18:08:12.234341 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 20 18:08:12.235522 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 20 18:08:12.236516 systemd[1]: Starting update-engine.service - Update Engine...
Mar 20 18:08:12.238410 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 20 18:08:12.242320 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 20 18:08:12.244433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 20 18:08:12.244600 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 20 18:08:12.245030 extend-filesystems[1436]: Found loop3
Mar 20 18:08:12.245030 extend-filesystems[1436]: Found loop4
Mar 20 18:08:12.245030 extend-filesystems[1436]: Found loop5
Mar 20 18:08:12.245030 extend-filesystems[1436]: Found vda
Mar 20 18:08:12.245030 extend-filesystems[1436]: Found vda1
Mar 20 18:08:12.259336 extend-filesystems[1436]: Found vda2
Mar 20 18:08:12.259336 extend-filesystems[1436]: Found vda3
Mar 20 18:08:12.259336 extend-filesystems[1436]: Found usr
Mar 20 18:08:12.259336 extend-filesystems[1436]: Found vda4
Mar 20 18:08:12.259336 extend-filesystems[1436]: Found vda6
Mar 20 18:08:12.259336 extend-filesystems[1436]: Found vda7
Mar 20 18:08:12.259336 extend-filesystems[1436]: Found vda9
Mar 20 18:08:12.259336 extend-filesystems[1436]: Checking size of /dev/vda9
Mar 20 18:08:12.256143 dbus-daemon[1434]: [system] SELinux support is enabled
Mar 20 18:08:12.274973 jq[1446]: true
Mar 20 18:08:12.247197 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 20 18:08:12.247375 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 20 18:08:12.261727 systemd[1]: motdgen.service: Deactivated successfully.
Mar 20 18:08:12.276734 jq[1459]: true
Mar 20 18:08:12.263329 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 20 18:08:12.264385 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 20 18:08:12.273785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 20 18:08:12.273807 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 20 18:08:12.275430 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 20 18:08:12.275459 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 20 18:08:12.280785 extend-filesystems[1436]: Resized partition /dev/vda9
Mar 20 18:08:12.282455 tar[1452]: linux-arm64/helm
Mar 20 18:08:12.291643 extend-filesystems[1473]: resize2fs 1.47.2 (1-Jan-2025)
Mar 20 18:08:12.291791 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 20 18:08:12.293343 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 20 18:08:12.308408 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1367)
Mar 20 18:08:12.321796 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 20 18:08:12.331461 update_engine[1445]: I20250320 18:08:12.319450 1445 main.cc:92] Flatcar Update Engine starting
Mar 20 18:08:12.331461 update_engine[1445]: I20250320 18:08:12.323544 1445 update_check_scheduler.cc:74] Next update check in 11m52s
Mar 20 18:08:12.326449 systemd[1]: Started update-engine.service - Update Engine.
Mar 20 18:08:12.330496 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 20 18:08:12.334798 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 20 18:08:12.334798 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 20 18:08:12.334798 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 20 18:08:12.339330 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Mar 20 18:08:12.338715 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 20 18:08:12.338902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 20 18:08:12.342646 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
Mar 20 18:08:12.345854 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 20 18:08:12.347300 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 20 18:08:12.349951 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 20 18:08:12.353209 systemd-logind[1442]: New seat seat0.
Mar 20 18:08:12.362654 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 20 18:08:12.415995 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 20 18:08:12.518601 containerd[1465]: time="2025-03-20T18:08:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 20 18:08:12.520334 containerd[1465]: time="2025-03-20T18:08:12.519378800Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 20 18:08:12.529945 containerd[1465]: time="2025-03-20T18:08:12.529903600Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.56µs"
Mar 20 18:08:12.529945 containerd[1465]: time="2025-03-20T18:08:12.529941440Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 20 18:08:12.530031 containerd[1465]: time="2025-03-20T18:08:12.529959520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 20 18:08:12.530136 containerd[1465]: time="2025-03-20T18:08:12.530116920Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 20 18:08:12.530165 containerd[1465]: time="2025-03-20T18:08:12.530139360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 20 18:08:12.530205 containerd[1465]: time="2025-03-20T18:08:12.530164880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530224 containerd[1465]: time="2025-03-20T18:08:12.530213400Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530242 containerd[1465]: time="2025-03-20T18:08:12.530226360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530572 containerd[1465]: time="2025-03-20T18:08:12.530529760Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530572 containerd[1465]: time="2025-03-20T18:08:12.530551800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530572 containerd[1465]: time="2025-03-20T18:08:12.530562840Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530572 containerd[1465]: time="2025-03-20T18:08:12.530571200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530669 containerd[1465]: time="2025-03-20T18:08:12.530643560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530843 containerd[1465]: time="2025-03-20T18:08:12.530823160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530873 containerd[1465]: time="2025-03-20T18:08:12.530860920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 20 18:08:12.530892 containerd[1465]: time="2025-03-20T18:08:12.530874560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 20 18:08:12.530920 containerd[1465]: time="2025-03-20T18:08:12.530904400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 20 18:08:12.532457 containerd[1465]: time="2025-03-20T18:08:12.532398600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 20 18:08:12.532512 containerd[1465]: time="2025-03-20T18:08:12.532492320Z" level=info msg="metadata content store policy set" policy=shared
Mar 20 18:08:12.537729 containerd[1465]: time="2025-03-20T18:08:12.537689480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537751720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537768360Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537780440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537793320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537804680Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537817560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537829720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537840800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537851280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537860560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 20 18:08:12.537871 containerd[1465]: time="2025-03-20T18:08:12.537872800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538009080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538030160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538049440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538060600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538071360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538091840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538105400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538122800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 20 18:08:12.538141 containerd[1465]: time="2025-03-20T18:08:12.538137920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 20 18:08:12.538300 containerd[1465]: time="2025-03-20T18:08:12.538149400Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 20 18:08:12.538300 containerd[1465]: time="2025-03-20T18:08:12.538166720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 20 18:08:12.538506 containerd[1465]: time="2025-03-20T18:08:12.538439800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 20 18:08:12.538506 containerd[1465]: time="2025-03-20T18:08:12.538465600Z" level=info msg="Start snapshots syncer"
Mar 20 18:08:12.538506 containerd[1465]: time="2025-03-20T18:08:12.538489160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 20 18:08:12.538772 containerd[1465]: time="2025-03-20T18:08:12.538713760Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 20 18:08:12.538772 containerd[1465]: time="2025-03-20T18:08:12.538769720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 20 18:08:12.538984 containerd[1465]: time="2025-03-20T18:08:12.538841840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 20 18:08:12.538984 containerd[1465]: time="2025-03-20T18:08:12.538946200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 20 18:08:12.538984 containerd[1465]: time="2025-03-20T18:08:12.538968880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 20 18:08:12.538984 containerd[1465]: time="2025-03-20T18:08:12.538979920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 20 18:08:12.539061 containerd[1465]: time="2025-03-20T18:08:12.538990280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 20 18:08:12.539061 containerd[1465]: time="2025-03-20T18:08:12.539004280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 20 18:08:12.539061 containerd[1465]: time="2025-03-20T18:08:12.539014720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 20 18:08:12.539061 containerd[1465]: time="2025-03-20T18:08:12.539026480Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 20 18:08:12.539061 containerd[1465]: time="2025-03-20T18:08:12.539051240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 20 18:08:12.539154 containerd[1465]: time="2025-03-20T18:08:12.539071560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 20 18:08:12.539154 containerd[1465]: time="2025-03-20T18:08:12.539095480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 20 18:08:12.539154 containerd[1465]: time="2025-03-20T18:08:12.539129240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 20 18:08:12.539154 containerd[1465]: time="2025-03-20T18:08:12.539142600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 20 18:08:12.539154 containerd[1465]: time="2025-03-20T18:08:12.539151240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 20 18:08:12.539236 containerd[1465]: time="2025-03-20T18:08:12.539161040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 20 18:08:12.539236 containerd[1465]: time="2025-03-20T18:08:12.539170480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 20 18:08:12.539236 containerd[1465]: time="2025-03-20T18:08:12.539181440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 20 18:08:12.539236 containerd[1465]: time="2025-03-20T18:08:12.539192520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 20 18:08:12.539324 containerd[1465]: time="2025-03-20T18:08:12.539268800Z" level=info msg="runtime interface created"
Mar 20 18:08:12.539324 containerd[1465]: time="2025-03-20T18:08:12.539274320Z" level=info msg="created NRI interface"
Mar 20 18:08:12.539324 containerd[1465]: time="2025-03-20T18:08:12.539299920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 20 18:08:12.539324 containerd[1465]: time="2025-03-20T18:08:12.539311280Z" level=info msg="Connect containerd service"
Mar 20 18:08:12.539387 containerd[1465]: time="2025-03-20T18:08:12.539347440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 20 18:08:12.540017 containerd[1465]: time="2025-03-20T18:08:12.539986760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649457960Z" level=info msg="Start subscribing containerd event"
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649529480Z" level=info msg="Start recovering state"
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649723880Z" level=info msg="Start event monitor"
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649739120Z" level=info msg="Start cni network conf syncer for default"
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649748520Z" level=info msg="Start streaming server"
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649758360Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649765360Z" level=info msg="runtime interface starting up..."
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649771560Z" level=info msg="starting plugins..."
Mar 20 18:08:12.649970 containerd[1465]: time="2025-03-20T18:08:12.649786800Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 20 18:08:12.650265 containerd[1465]: time="2025-03-20T18:08:12.649992040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 20 18:08:12.650346 containerd[1465]: time="2025-03-20T18:08:12.650322240Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 20 18:08:12.650499 systemd[1]: Started containerd.service - containerd container runtime.
Mar 20 18:08:12.651798 containerd[1465]: time="2025-03-20T18:08:12.650496080Z" level=info msg="containerd successfully booted in 0.133583s"
Mar 20 18:08:12.689562 tar[1452]: linux-arm64/LICENSE
Mar 20 18:08:12.689673 tar[1452]: linux-arm64/README.md
Mar 20 18:08:12.706531 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 20 18:08:13.027461 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 20 18:08:13.046791 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 20 18:08:13.049697 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 20 18:08:13.078002 systemd[1]: issuegen.service: Deactivated successfully.
Mar 20 18:08:13.078235 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 20 18:08:13.081225 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 20 18:08:13.104022 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 20 18:08:13.107052 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 20 18:08:13.109402 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 20 18:08:13.110840 systemd[1]: Reached target getty.target - Login Prompts.
Mar 20 18:08:13.156733 systemd-networkd[1399]: eth0: Gained IPv6LL
Mar 20 18:08:13.159083 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 20 18:08:13.161013 systemd[1]: Reached target network-online.target - Network is Online.
Mar 20 18:08:13.163653 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 20 18:08:13.166036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 18:08:13.177289 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 20 18:08:13.197457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 20 18:08:13.198895 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 20 18:08:13.199090 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 20 18:08:13.201624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 20 18:08:13.667900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:08:13.669854 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 20 18:08:13.671462 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 18:08:13.675857 systemd[1]: Startup finished in 547ms (kernel) + 4.840s (initrd) + 3.162s (userspace) = 8.550s.
Mar 20 18:08:14.166661 kubelet[1561]: E0320 18:08:14.166561 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 18:08:14.169156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 18:08:14.169319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 18:08:14.169691 systemd[1]: kubelet.service: Consumed 810ms CPU time, 233.9M memory peak.
Mar 20 18:08:18.762782 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 20 18:08:18.763883 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644).
Mar 20 18:08:18.818613 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:08:18.822247 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:08:18.833422 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 20 18:08:18.834281 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 20 18:08:18.839167 systemd-logind[1442]: New session 1 of user core.
Mar 20 18:08:18.854949 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 20 18:08:18.857273 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 20 18:08:18.872183 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 20 18:08:18.874185 systemd-logind[1442]: New session c1 of user core.
Mar 20 18:08:18.977981 systemd[1579]: Queued start job for default target default.target.
Mar 20 18:08:18.987218 systemd[1579]: Created slice app.slice - User Application Slice.
Mar 20 18:08:18.987249 systemd[1579]: Reached target paths.target - Paths.
Mar 20 18:08:18.987285 systemd[1579]: Reached target timers.target - Timers.
Mar 20 18:08:18.988554 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 20 18:08:18.997548 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 20 18:08:18.997609 systemd[1579]: Reached target sockets.target - Sockets.
Mar 20 18:08:18.997645 systemd[1579]: Reached target basic.target - Basic System.
Mar 20 18:08:18.997672 systemd[1579]: Reached target default.target - Main User Target.
Mar 20 18:08:18.997707 systemd[1579]: Startup finished in 118ms.
Mar 20 18:08:18.997975 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 20 18:08:18.999412 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 20 18:08:19.068393 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:57646.service - OpenSSH per-connection server daemon (10.0.0.1:57646).
Mar 20 18:08:19.115547 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 57646 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:08:19.116697 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:08:19.120945 systemd-logind[1442]: New session 2 of user core. Mar 20 18:08:19.128504 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 20 18:08:19.179224 sshd[1592]: Connection closed by 10.0.0.1 port 57646 Mar 20 18:08:19.179663 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Mar 20 18:08:19.189532 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:57646.service: Deactivated successfully. Mar 20 18:08:19.192674 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 18:08:19.193270 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Mar 20 18:08:19.195531 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:57654.service - OpenSSH per-connection server daemon (10.0.0.1:57654). Mar 20 18:08:19.196805 systemd-logind[1442]: Removed session 2. Mar 20 18:08:19.243669 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 57654 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:08:19.244862 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:08:19.249131 systemd-logind[1442]: New session 3 of user core. Mar 20 18:08:19.260443 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 18:08:19.308078 sshd[1600]: Connection closed by 10.0.0.1 port 57654 Mar 20 18:08:19.308592 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Mar 20 18:08:19.322110 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:57654.service: Deactivated successfully. Mar 20 18:08:19.324108 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 18:08:19.324815 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. 
Mar 20 18:08:19.326545 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:57660.service - OpenSSH per-connection server daemon (10.0.0.1:57660). Mar 20 18:08:19.327418 systemd-logind[1442]: Removed session 3. Mar 20 18:08:19.372488 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 57660 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:08:19.373726 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:08:19.378248 systemd-logind[1442]: New session 4 of user core. Mar 20 18:08:19.389460 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 18:08:19.440816 sshd[1608]: Connection closed by 10.0.0.1 port 57660 Mar 20 18:08:19.441188 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Mar 20 18:08:19.459511 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:57660.service: Deactivated successfully. Mar 20 18:08:19.460841 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 18:08:19.462469 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Mar 20 18:08:19.464085 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:57670.service - OpenSSH per-connection server daemon (10.0.0.1:57670). Mar 20 18:08:19.464985 systemd-logind[1442]: Removed session 4. Mar 20 18:08:19.519467 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 57670 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:08:19.520587 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:08:19.525071 systemd-logind[1442]: New session 5 of user core. Mar 20 18:08:19.534482 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 20 18:08:19.600353 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 18:08:19.602813 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 18:08:19.621105 sudo[1617]: pam_unix(sudo:session): session closed for user root Mar 20 18:08:19.622898 sshd[1616]: Connection closed by 10.0.0.1 port 57670 Mar 20 18:08:19.622727 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Mar 20 18:08:19.644506 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:57670.service: Deactivated successfully. Mar 20 18:08:19.647530 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 18:08:19.648452 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Mar 20 18:08:19.650039 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:57672.service - OpenSSH per-connection server daemon (10.0.0.1:57672). Mar 20 18:08:19.651797 systemd-logind[1442]: Removed session 5. Mar 20 18:08:19.703117 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 57672 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:08:19.704388 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:08:19.709023 systemd-logind[1442]: New session 6 of user core. Mar 20 18:08:19.718066 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 20 18:08:19.769511 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 18:08:19.770134 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 18:08:19.773356 sudo[1627]: pam_unix(sudo:session): session closed for user root Mar 20 18:08:19.777873 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 18:08:19.778148 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 18:08:19.786030 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 18:08:19.821808 augenrules[1649]: No rules Mar 20 18:08:19.822916 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 18:08:19.823145 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 18:08:19.824475 sudo[1626]: pam_unix(sudo:session): session closed for user root Mar 20 18:08:19.825618 sshd[1625]: Connection closed by 10.0.0.1 port 57672 Mar 20 18:08:19.826062 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Mar 20 18:08:19.832995 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:57672.service: Deactivated successfully. Mar 20 18:08:19.835582 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 18:08:19.836163 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Mar 20 18:08:19.837233 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:57684.service - OpenSSH per-connection server daemon (10.0.0.1:57684). Mar 20 18:08:19.838744 systemd-logind[1442]: Removed session 6. Mar 20 18:08:19.885456 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 57684 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:08:19.886796 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:08:19.891086 systemd-logind[1442]: New session 7 of user core. 
Mar 20 18:08:19.901424 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 20 18:08:19.951761 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 18:08:19.952028 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 18:08:20.288681 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 18:08:20.310632 (dockerd)[1682]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 18:08:20.558924 dockerd[1682]: time="2025-03-20T18:08:20.558786160Z" level=info msg="Starting up" Mar 20 18:08:20.560238 dockerd[1682]: time="2025-03-20T18:08:20.560212231Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 18:08:20.660944 dockerd[1682]: time="2025-03-20T18:08:20.660884674Z" level=info msg="Loading containers: start." Mar 20 18:08:20.795314 kernel: Initializing XFRM netlink socket Mar 20 18:08:20.847001 systemd-networkd[1399]: docker0: Link UP Mar 20 18:08:20.907366 dockerd[1682]: time="2025-03-20T18:08:20.907325406Z" level=info msg="Loading containers: done." 
Mar 20 18:08:20.921996 dockerd[1682]: time="2025-03-20T18:08:20.921616731Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 18:08:20.921996 dockerd[1682]: time="2025-03-20T18:08:20.921691672Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 18:08:20.921996 dockerd[1682]: time="2025-03-20T18:08:20.921866281Z" level=info msg="Daemon has completed initialization" Mar 20 18:08:20.952326 dockerd[1682]: time="2025-03-20T18:08:20.952262818Z" level=info msg="API listen on /run/docker.sock" Mar 20 18:08:20.952447 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 18:08:21.774901 containerd[1465]: time="2025-03-20T18:08:21.774862706Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 20 18:08:22.381786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109162434.mount: Deactivated successfully. 
Mar 20 18:08:23.608077 containerd[1465]: time="2025-03-20T18:08:23.608019981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:23.608484 containerd[1465]: time="2025-03-20T18:08:23.608402513Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768" Mar 20 18:08:23.609260 containerd[1465]: time="2025-03-20T18:08:23.609227833Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:23.612246 containerd[1465]: time="2025-03-20T18:08:23.612200440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:23.612866 containerd[1465]: time="2025-03-20T18:08:23.612807583Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 1.83790194s" Mar 20 18:08:23.612866 containerd[1465]: time="2025-03-20T18:08:23.612840929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 20 18:08:23.613625 containerd[1465]: time="2025-03-20T18:08:23.613567699Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 20 18:08:24.419950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 20 18:08:24.421753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:08:24.542195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:08:24.545414 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 18:08:24.661051 kubelet[1949]: E0320 18:08:24.660996 1949 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 18:08:24.664837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 18:08:24.665448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 18:08:24.666476 systemd[1]: kubelet.service: Consumed 134ms CPU time, 96.8M memory peak. 
Mar 20 18:08:24.978111 containerd[1465]: time="2025-03-20T18:08:24.978067108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:24.979041 containerd[1465]: time="2025-03-20T18:08:24.978845506Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980" Mar 20 18:08:24.979776 containerd[1465]: time="2025-03-20T18:08:24.979732957Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:24.982894 containerd[1465]: time="2025-03-20T18:08:24.982847073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:24.984541 containerd[1465]: time="2025-03-20T18:08:24.984160603Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.370566039s" Mar 20 18:08:24.984541 containerd[1465]: time="2025-03-20T18:08:24.984194488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 20 18:08:24.984978 containerd[1465]: time="2025-03-20T18:08:24.984955683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 20 18:08:26.444149 containerd[1465]: time="2025-03-20T18:08:26.444103108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:26.445176 containerd[1465]: time="2025-03-20T18:08:26.444874594Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831" Mar 20 18:08:26.445547 containerd[1465]: time="2025-03-20T18:08:26.445499612Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:26.447956 containerd[1465]: time="2025-03-20T18:08:26.447926650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:26.449034 containerd[1465]: time="2025-03-20T18:08:26.448994163Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.464009991s" Mar 20 18:08:26.449034 containerd[1465]: time="2025-03-20T18:08:26.449026122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 20 18:08:26.449461 containerd[1465]: time="2025-03-20T18:08:26.449415418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 20 18:08:27.335241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703229.mount: Deactivated successfully.
Mar 20 18:08:27.691103 containerd[1465]: time="2025-03-20T18:08:27.690967766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:27.691631 containerd[1465]: time="2025-03-20T18:08:27.691543892Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 20 18:08:27.692242 containerd[1465]: time="2025-03-20T18:08:27.692194502Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:27.693765 containerd[1465]: time="2025-03-20T18:08:27.693734465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:27.694471 containerd[1465]: time="2025-03-20T18:08:27.694435962Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.24497582s" Mar 20 18:08:27.694505 containerd[1465]: time="2025-03-20T18:08:27.694474568Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 20 18:08:27.694946 containerd[1465]: time="2025-03-20T18:08:27.694905901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 20 18:08:28.277067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066905598.mount: Deactivated successfully. 
Mar 20 18:08:29.000007 containerd[1465]: time="2025-03-20T18:08:28.999939239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:29.000973 containerd[1465]: time="2025-03-20T18:08:29.000907895Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 20 18:08:29.001879 containerd[1465]: time="2025-03-20T18:08:29.001840009Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:29.006205 containerd[1465]: time="2025-03-20T18:08:29.004822893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:29.006205 containerd[1465]: time="2025-03-20T18:08:29.005831303Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.310889049s" Mar 20 18:08:29.006205 containerd[1465]: time="2025-03-20T18:08:29.005859293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 20 18:08:29.006596 containerd[1465]: time="2025-03-20T18:08:29.006565866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 20 18:08:29.412831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485425491.mount: Deactivated successfully. 
Mar 20 18:08:29.416584 containerd[1465]: time="2025-03-20T18:08:29.416540325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 18:08:29.418010 containerd[1465]: time="2025-03-20T18:08:29.417936187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 20 18:08:29.418927 containerd[1465]: time="2025-03-20T18:08:29.418884687Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 18:08:29.421113 containerd[1465]: time="2025-03-20T18:08:29.421084606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 18:08:29.422170 containerd[1465]: time="2025-03-20T18:08:29.422123974Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 415.52679ms" Mar 20 18:08:29.422170 containerd[1465]: time="2025-03-20T18:08:29.422166160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 20 18:08:29.422694 containerd[1465]: time="2025-03-20T18:08:29.422659959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 20 18:08:29.863412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516265829.mount: Deactivated successfully. Mar 20 18:08:32.201556 containerd[1465]: time="2025-03-20T18:08:32.201507744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:32.202128 containerd[1465]: time="2025-03-20T18:08:32.202075900Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Mar 20 18:08:32.203018 containerd[1465]: time="2025-03-20T18:08:32.202977857Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:32.205604 containerd[1465]: time="2025-03-20T18:08:32.205573383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:32.206739 containerd[1465]: time="2025-03-20T18:08:32.206700559Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.784004276s" Mar 20 18:08:32.206739 containerd[1465]: time="2025-03-20T18:08:32.206736981Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 20 18:08:34.775907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 20 18:08:34.777710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:08:34.886143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:08:34.900553 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 18:08:34.933296 kubelet[2106]: E0320 18:08:34.933234 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 18:08:34.935823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 18:08:34.935961 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 18:08:34.936243 systemd[1]: kubelet.service: Consumed 124ms CPU time, 96.9M memory peak. Mar 20 18:08:38.893426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:08:38.893579 systemd[1]: kubelet.service: Consumed 124ms CPU time, 96.9M memory peak. Mar 20 18:08:38.895436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:08:38.914562 systemd[1]: Reload requested from client PID 2122 ('systemctl') (unit session-7.scope)... Mar 20 18:08:38.914577 systemd[1]: Reloading... Mar 20 18:08:38.987386 zram_generator::config[2166]: No configuration found. Mar 20 18:08:39.206798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 18:08:39.279735 systemd[1]: Reloading finished in 364 ms. Mar 20 18:08:39.318473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:08:39.321606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:08:39.321873 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 20 18:08:39.323360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:08:39.323401 systemd[1]: kubelet.service: Consumed 82ms CPU time, 82.5M memory peak. Mar 20 18:08:39.325117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:08:39.431125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:08:39.434365 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 18:08:39.468370 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 18:08:39.468370 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 18:08:39.468370 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 18:08:39.468724 kubelet[2213]: I0320 18:08:39.468487 2213 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 18:08:40.281270 kubelet[2213]: I0320 18:08:40.281224 2213 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 20 18:08:40.281270 kubelet[2213]: I0320 18:08:40.281260 2213 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 18:08:40.281543 kubelet[2213]: I0320 18:08:40.281515 2213 server.go:929] "Client rotation is on, will bootstrap in background" Mar 20 18:08:40.328580 kubelet[2213]: E0320 18:08:40.327223 2213 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:08:40.328858 kubelet[2213]: I0320 18:08:40.328829 2213 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 18:08:40.335313 kubelet[2213]: I0320 18:08:40.335272 2213 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 18:08:40.338669 kubelet[2213]: I0320 18:08:40.338647 2213 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 20 18:08:40.340786 kubelet[2213]: I0320 18:08:40.340767 2213 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 18:08:40.340916 kubelet[2213]: I0320 18:08:40.340879 2213 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 18:08:40.341062 kubelet[2213]: I0320 18:08:40.340905 2213 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 18:08:40.341149 kubelet[2213]: I0320 18:08:40.341118 2213 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 18:08:40.341149 kubelet[2213]: I0320 18:08:40.341127 2213 container_manager_linux.go:300] "Creating device plugin manager" Mar 20 18:08:40.341332 kubelet[2213]: I0320 18:08:40.341306 2213 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:08:40.343078 kubelet[2213]: I0320 18:08:40.342950 2213 kubelet.go:408] "Attempting to sync node with API server" Mar 20 18:08:40.343078 kubelet[2213]: I0320 18:08:40.342978 2213 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 18:08:40.343078 kubelet[2213]: I0320 18:08:40.343073 2213 kubelet.go:314] "Adding apiserver pod source" Mar 20 18:08:40.343078 kubelet[2213]: I0320 18:08:40.343085 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 18:08:40.345067 kubelet[2213]: I0320 18:08:40.344856 2213 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 18:08:40.346621 kubelet[2213]: I0320 18:08:40.346544 2213 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 18:08:40.346804 kubelet[2213]: W0320 18:08:40.346728 2213 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 20 18:08:40.346804 kubelet[2213]: E0320 18:08:40.346784 2213 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:08:40.347477 kubelet[2213]: W0320 18:08:40.347399 2213 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 20 18:08:40.347477 kubelet[2213]: E0320 18:08:40.347445 2213 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:08:40.347477 kubelet[2213]: W0320 18:08:40.347468 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 20 18:08:40.348227 kubelet[2213]: I0320 18:08:40.348119 2213 server.go:1269] "Started kubelet" Mar 20 18:08:40.349244 kubelet[2213]: I0320 18:08:40.349135 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 18:08:40.349668 kubelet[2213]: I0320 18:08:40.349458 2213 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 18:08:40.349668 kubelet[2213]: I0320 18:08:40.349483 2213 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 18:08:40.350597 kubelet[2213]: I0320 18:08:40.350572 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 18:08:40.350832 kubelet[2213]: I0320 18:08:40.350806 2213 server.go:460] "Adding debug handlers to kubelet server" Mar 20 18:08:40.352569 kubelet[2213]: I0320 18:08:40.352527 2213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 18:08:40.353224 kubelet[2213]: E0320 18:08:40.353163 2213 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 18:08:40.353262 kubelet[2213]: I0320 18:08:40.353240 2213 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 18:08:40.353773 kubelet[2213]: E0320 18:08:40.353733 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Mar 20 18:08:40.354668 kubelet[2213]: W0320 18:08:40.353949 2213 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 20 18:08:40.354668 kubelet[2213]: E0320 18:08:40.354006 2213 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:08:40.354668 kubelet[2213]: I0320 18:08:40.354472 2213 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 18:08:40.354668 kubelet[2213]: I0320 18:08:40.354620 2213 reconciler.go:26] "Reconciler: start to sync state" Mar 20 18:08:40.355919 kubelet[2213]: E0320 18:08:40.355081 2213 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 18:08:40.357561 kubelet[2213]: I0320 18:08:40.357465 2213 factory.go:221] Registration of the containerd container factory successfully Mar 20 18:08:40.357561 kubelet[2213]: I0320 18:08:40.357480 2213 factory.go:221] Registration of the systemd container factory successfully Mar 20 18:08:40.357871 kubelet[2213]: E0320 18:08:40.354887 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e952b9a8e0fdd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 18:08:40.348094429 +0000 UTC m=+0.910957038,LastTimestamp:2025-03-20 18:08:40.348094429 +0000 UTC m=+0.910957038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 18:08:40.358038 kubelet[2213]: I0320 18:08:40.358008 2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 18:08:40.367142 kubelet[2213]: I0320 18:08:40.367110 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 18:08:40.368100 kubelet[2213]: I0320 18:08:40.368080 2213 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 18:08:40.368327 kubelet[2213]: I0320 18:08:40.368195 2213 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 18:08:40.368663 kubelet[2213]: I0320 18:08:40.368417 2213 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 18:08:40.368663 kubelet[2213]: E0320 18:08:40.368459 2213 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 18:08:40.368988 kubelet[2213]: W0320 18:08:40.368943 2213 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 20 18:08:40.369047 kubelet[2213]: E0320 18:08:40.368997 2213 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:08:40.369071 kubelet[2213]: I0320 18:08:40.369053 2213 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 18:08:40.369071 kubelet[2213]: I0320 18:08:40.369060 2213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 18:08:40.369121 kubelet[2213]: I0320 18:08:40.369075 2213 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:08:40.430351 kubelet[2213]: I0320 18:08:40.430319 2213 policy_none.go:49] "None policy: Start" Mar 20 18:08:40.431109 kubelet[2213]: I0320 18:08:40.431092 2213 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 18:08:40.431154 kubelet[2213]: I0320 18:08:40.431121 2213 state_mem.go:35] "Initializing new in-memory state store" Mar 20 18:08:40.436704 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Mar 20 18:08:40.447738 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 18:08:40.453705 kubelet[2213]: E0320 18:08:40.453677 2213 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 18:08:40.460537 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 18:08:40.461702 kubelet[2213]: I0320 18:08:40.461669 2213 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 18:08:40.461862 kubelet[2213]: I0320 18:08:40.461837 2213 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 18:08:40.461891 kubelet[2213]: I0320 18:08:40.461855 2213 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 18:08:40.462205 kubelet[2213]: I0320 18:08:40.462079 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 18:08:40.463145 kubelet[2213]: E0320 18:08:40.463125 2213 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 18:08:40.476689 systemd[1]: Created slice kubepods-burstable-pod939aad522057d33f8280a69aef0be121.slice - libcontainer container kubepods-burstable-pod939aad522057d33f8280a69aef0be121.slice. Mar 20 18:08:40.489719 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 20 18:08:40.492666 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. 
Mar 20 18:08:40.554581 kubelet[2213]: E0320 18:08:40.554476 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Mar 20 18:08:40.555987 kubelet[2213]: I0320 18:08:40.555963 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:40.556193 kubelet[2213]: I0320 18:08:40.556002 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:40.556193 kubelet[2213]: I0320 18:08:40.556023 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:40.556345 kubelet[2213]: I0320 18:08:40.556316 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:40.556427 kubelet[2213]: I0320 18:08:40.556415 2213 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 18:08:40.556505 kubelet[2213]: I0320 18:08:40.556492 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:08:40.556625 kubelet[2213]: I0320 18:08:40.556583 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:08:40.556625 kubelet[2213]: I0320 18:08:40.556606 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:08:40.556736 kubelet[2213]: I0320 18:08:40.556712 2213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:40.563755 kubelet[2213]: I0320 18:08:40.563737 2213 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 18:08:40.564095 kubelet[2213]: E0320 18:08:40.564074 2213 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Mar 20 18:08:40.766115 kubelet[2213]: I0320 18:08:40.766086 2213 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 18:08:40.766523 kubelet[2213]: E0320 18:08:40.766485 2213 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Mar 20 18:08:40.789314 containerd[1465]: time="2025-03-20T18:08:40.789242936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:939aad522057d33f8280a69aef0be121,Namespace:kube-system,Attempt:0,}" Mar 20 18:08:40.792218 containerd[1465]: time="2025-03-20T18:08:40.792179795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 20 18:08:40.794971 containerd[1465]: time="2025-03-20T18:08:40.794912456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 20 18:08:40.813590 containerd[1465]: time="2025-03-20T18:08:40.813508497Z" level=info msg="connecting to shim 1810de36bbf6f7d2108c7298515a963b43a7e5bd16cd21bcfd74276343f342c6" address="unix:///run/containerd/s/7528ff872ed588473ba2ca5242ead4ebe3bfb9baf9263aa6ef6ca038b561c75f" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:08:40.824999 containerd[1465]: time="2025-03-20T18:08:40.824573860Z" level=info msg="connecting to shim 318555e3f6ceb8e5f9db63a69035c4095d83b5108c00cf1a9fc4f02d37863c18" 
address="unix:///run/containerd/s/a32f8cbe0c512d9a26cc96c964365a15872142916cad6c1fab50a797d18bc874" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:08:40.826154 containerd[1465]: time="2025-03-20T18:08:40.825943372Z" level=info msg="connecting to shim 160cf2f313b1712b50e84ee07abf9ab132e6791ebbe6f075739a4b299e281367" address="unix:///run/containerd/s/2d5b370c6575270640a56131cd8a3ecfe43866b04d62298b1d243a743e6651dc" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:08:40.851535 systemd[1]: Started cri-containerd-1810de36bbf6f7d2108c7298515a963b43a7e5bd16cd21bcfd74276343f342c6.scope - libcontainer container 1810de36bbf6f7d2108c7298515a963b43a7e5bd16cd21bcfd74276343f342c6. Mar 20 18:08:40.852666 systemd[1]: Started cri-containerd-318555e3f6ceb8e5f9db63a69035c4095d83b5108c00cf1a9fc4f02d37863c18.scope - libcontainer container 318555e3f6ceb8e5f9db63a69035c4095d83b5108c00cf1a9fc4f02d37863c18. Mar 20 18:08:40.856317 systemd[1]: Started cri-containerd-160cf2f313b1712b50e84ee07abf9ab132e6791ebbe6f075739a4b299e281367.scope - libcontainer container 160cf2f313b1712b50e84ee07abf9ab132e6791ebbe6f075739a4b299e281367. 
Mar 20 18:08:40.894718 containerd[1465]: time="2025-03-20T18:08:40.894632438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:939aad522057d33f8280a69aef0be121,Namespace:kube-system,Attempt:0,} returns sandbox id \"1810de36bbf6f7d2108c7298515a963b43a7e5bd16cd21bcfd74276343f342c6\"" Mar 20 18:08:40.896069 containerd[1465]: time="2025-03-20T18:08:40.895975175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"318555e3f6ceb8e5f9db63a69035c4095d83b5108c00cf1a9fc4f02d37863c18\"" Mar 20 18:08:40.898173 containerd[1465]: time="2025-03-20T18:08:40.898149474Z" level=info msg="CreateContainer within sandbox \"1810de36bbf6f7d2108c7298515a963b43a7e5bd16cd21bcfd74276343f342c6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 20 18:08:40.898732 containerd[1465]: time="2025-03-20T18:08:40.898406862Z" level=info msg="CreateContainer within sandbox \"318555e3f6ceb8e5f9db63a69035c4095d83b5108c00cf1a9fc4f02d37863c18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 20 18:08:40.898732 containerd[1465]: time="2025-03-20T18:08:40.898464656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"160cf2f313b1712b50e84ee07abf9ab132e6791ebbe6f075739a4b299e281367\"" Mar 20 18:08:40.904802 containerd[1465]: time="2025-03-20T18:08:40.904757697Z" level=info msg="Container 78edefe28a95a33ddbfa0dc25a8d2788a92fb62d0722c9e58126e352649f141d: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:08:40.913169 containerd[1465]: time="2025-03-20T18:08:40.913142389Z" level=info msg="CreateContainer within sandbox \"160cf2f313b1712b50e84ee07abf9ab132e6791ebbe6f075739a4b299e281367\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 20 
18:08:40.920099 containerd[1465]: time="2025-03-20T18:08:40.920062673Z" level=info msg="CreateContainer within sandbox \"1810de36bbf6f7d2108c7298515a963b43a7e5bd16cd21bcfd74276343f342c6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78edefe28a95a33ddbfa0dc25a8d2788a92fb62d0722c9e58126e352649f141d\"" Mar 20 18:08:40.920718 containerd[1465]: time="2025-03-20T18:08:40.920644370Z" level=info msg="StartContainer for \"78edefe28a95a33ddbfa0dc25a8d2788a92fb62d0722c9e58126e352649f141d\"" Mar 20 18:08:40.920864 containerd[1465]: time="2025-03-20T18:08:40.920841164Z" level=info msg="Container b83e2681258c2b187bbfd3a9b055cfa20b535286da497fa7e9f1d2749084be05: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:08:40.921880 containerd[1465]: time="2025-03-20T18:08:40.921845785Z" level=info msg="connecting to shim 78edefe28a95a33ddbfa0dc25a8d2788a92fb62d0722c9e58126e352649f141d" address="unix:///run/containerd/s/7528ff872ed588473ba2ca5242ead4ebe3bfb9baf9263aa6ef6ca038b561c75f" protocol=ttrpc version=3 Mar 20 18:08:40.925213 containerd[1465]: time="2025-03-20T18:08:40.924362321Z" level=info msg="Container 8abd19b4051e4646ad233f39df6bd241ab4711b8568598c1f86fd0512c0d7b6b: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:08:40.928793 containerd[1465]: time="2025-03-20T18:08:40.928762588Z" level=info msg="CreateContainer within sandbox \"318555e3f6ceb8e5f9db63a69035c4095d83b5108c00cf1a9fc4f02d37863c18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b83e2681258c2b187bbfd3a9b055cfa20b535286da497fa7e9f1d2749084be05\"" Mar 20 18:08:40.929193 containerd[1465]: time="2025-03-20T18:08:40.929170624Z" level=info msg="StartContainer for \"b83e2681258c2b187bbfd3a9b055cfa20b535286da497fa7e9f1d2749084be05\"" Mar 20 18:08:40.930123 containerd[1465]: time="2025-03-20T18:08:40.930098761Z" level=info msg="connecting to shim b83e2681258c2b187bbfd3a9b055cfa20b535286da497fa7e9f1d2749084be05" 
address="unix:///run/containerd/s/a32f8cbe0c512d9a26cc96c964365a15872142916cad6c1fab50a797d18bc874" protocol=ttrpc version=3 Mar 20 18:08:40.932841 containerd[1465]: time="2025-03-20T18:08:40.932806367Z" level=info msg="CreateContainer within sandbox \"160cf2f313b1712b50e84ee07abf9ab132e6791ebbe6f075739a4b299e281367\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8abd19b4051e4646ad233f39df6bd241ab4711b8568598c1f86fd0512c0d7b6b\"" Mar 20 18:08:40.933228 containerd[1465]: time="2025-03-20T18:08:40.933162013Z" level=info msg="StartContainer for \"8abd19b4051e4646ad233f39df6bd241ab4711b8568598c1f86fd0512c0d7b6b\"" Mar 20 18:08:40.934199 containerd[1465]: time="2025-03-20T18:08:40.934174959Z" level=info msg="connecting to shim 8abd19b4051e4646ad233f39df6bd241ab4711b8568598c1f86fd0512c0d7b6b" address="unix:///run/containerd/s/2d5b370c6575270640a56131cd8a3ecfe43866b04d62298b1d243a743e6651dc" protocol=ttrpc version=3 Mar 20 18:08:40.943421 systemd[1]: Started cri-containerd-78edefe28a95a33ddbfa0dc25a8d2788a92fb62d0722c9e58126e352649f141d.scope - libcontainer container 78edefe28a95a33ddbfa0dc25a8d2788a92fb62d0722c9e58126e352649f141d. Mar 20 18:08:40.947647 systemd[1]: Started cri-containerd-b83e2681258c2b187bbfd3a9b055cfa20b535286da497fa7e9f1d2749084be05.scope - libcontainer container b83e2681258c2b187bbfd3a9b055cfa20b535286da497fa7e9f1d2749084be05. Mar 20 18:08:40.951861 systemd[1]: Started cri-containerd-8abd19b4051e4646ad233f39df6bd241ab4711b8568598c1f86fd0512c0d7b6b.scope - libcontainer container 8abd19b4051e4646ad233f39df6bd241ab4711b8568598c1f86fd0512c0d7b6b. 
Mar 20 18:08:40.955625 kubelet[2213]: E0320 18:08:40.955565 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Mar 20 18:08:40.986746 containerd[1465]: time="2025-03-20T18:08:40.986647282Z" level=info msg="StartContainer for \"78edefe28a95a33ddbfa0dc25a8d2788a92fb62d0722c9e58126e352649f141d\" returns successfully" Mar 20 18:08:40.990710 containerd[1465]: time="2025-03-20T18:08:40.989004446Z" level=info msg="StartContainer for \"b83e2681258c2b187bbfd3a9b055cfa20b535286da497fa7e9f1d2749084be05\" returns successfully" Mar 20 18:08:40.999104 containerd[1465]: time="2025-03-20T18:08:40.999020602Z" level=info msg="StartContainer for \"8abd19b4051e4646ad233f39df6bd241ab4711b8568598c1f86fd0512c0d7b6b\" returns successfully" Mar 20 18:08:41.168806 kubelet[2213]: I0320 18:08:41.168121 2213 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 18:08:42.566211 kubelet[2213]: E0320 18:08:42.566180 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 20 18:08:42.696189 kubelet[2213]: I0320 18:08:42.696150 2213 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 20 18:08:42.696189 kubelet[2213]: E0320 18:08:42.696192 2213 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 20 18:08:43.346832 kubelet[2213]: I0320 18:08:43.346783 2213 apiserver.go:52] "Watching apiserver" Mar 20 18:08:43.353435 kubelet[2213]: I0320 18:08:43.353389 2213 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 18:08:43.379881 kubelet[2213]: E0320 18:08:43.379848 2213 kubelet.go:1915] "Failed creating a mirror pod 
for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:44.407454 systemd[1]: Reload requested from client PID 2481 ('systemctl') (unit session-7.scope)... Mar 20 18:08:44.407474 systemd[1]: Reloading... Mar 20 18:08:44.489319 zram_generator::config[2525]: No configuration found. Mar 20 18:08:44.578627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 18:08:44.666842 systemd[1]: Reloading finished in 259 ms. Mar 20 18:08:44.691046 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:08:44.706235 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 18:08:44.706566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:08:44.706627 systemd[1]: kubelet.service: Consumed 1.275s CPU time, 117.2M memory peak. Mar 20 18:08:44.708455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:08:44.839691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:08:44.843815 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 18:08:44.887219 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 18:08:44.887219 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 20 18:08:44.887219 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 18:08:44.887570 kubelet[2567]: I0320 18:08:44.887271 2567 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 18:08:44.893460 kubelet[2567]: I0320 18:08:44.893338 2567 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 20 18:08:44.893460 kubelet[2567]: I0320 18:08:44.893379 2567 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 18:08:44.893650 kubelet[2567]: I0320 18:08:44.893587 2567 server.go:929] "Client rotation is on, will bootstrap in background" Mar 20 18:08:44.894915 kubelet[2567]: I0320 18:08:44.894894 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 20 18:08:44.897049 kubelet[2567]: I0320 18:08:44.896905 2567 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 18:08:44.900424 kubelet[2567]: I0320 18:08:44.900401 2567 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 18:08:44.902688 kubelet[2567]: I0320 18:08:44.902661 2567 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 18:08:44.902777 kubelet[2567]: I0320 18:08:44.902763 2567 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 18:08:44.902896 kubelet[2567]: I0320 18:08:44.902872 2567 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 18:08:44.903066 kubelet[2567]: I0320 18:08:44.902896 2567 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Mar 20 18:08:44.903149 kubelet[2567]: I0320 18:08:44.903067 2567 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 18:08:44.903149 kubelet[2567]: I0320 18:08:44.903076 2567 container_manager_linux.go:300] "Creating device plugin manager" Mar 20 18:08:44.903149 kubelet[2567]: I0320 18:08:44.903102 2567 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:08:44.903223 kubelet[2567]: I0320 18:08:44.903196 2567 kubelet.go:408] "Attempting to sync node with API server" Mar 20 18:08:44.903223 kubelet[2567]: I0320 18:08:44.903207 2567 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 18:08:44.903262 kubelet[2567]: I0320 18:08:44.903226 2567 kubelet.go:314] "Adding apiserver pod source" Mar 20 18:08:44.903262 kubelet[2567]: I0320 18:08:44.903235 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 18:08:44.904024 kubelet[2567]: I0320 18:08:44.903711 2567 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 18:08:44.904213 kubelet[2567]: I0320 18:08:44.904140 2567 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 18:08:44.904533 kubelet[2567]: I0320 18:08:44.904516 2567 server.go:1269] "Started kubelet" Mar 20 18:08:44.906738 kubelet[2567]: I0320 18:08:44.906527 2567 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 18:08:44.906738 kubelet[2567]: I0320 18:08:44.906649 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 18:08:44.907002 kubelet[2567]: I0320 18:08:44.906979 2567 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 18:08:44.908960 kubelet[2567]: I0320 18:08:44.908930 2567 server.go:460] "Adding debug handlers to kubelet server" Mar 20 18:08:44.909121 
kubelet[2567]: I0320 18:08:44.909094 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 18:08:44.912681 kubelet[2567]: I0320 18:08:44.912601 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 18:08:44.914837 kubelet[2567]: I0320 18:08:44.913654 2567 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 18:08:44.914837 kubelet[2567]: E0320 18:08:44.913835 2567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 18:08:44.914837 kubelet[2567]: I0320 18:08:44.914106 2567 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 18:08:44.914837 kubelet[2567]: I0320 18:08:44.914339 2567 reconciler.go:26] "Reconciler: start to sync state" Mar 20 18:08:44.915100 kubelet[2567]: E0320 18:08:44.915015 2567 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 18:08:44.915578 kubelet[2567]: I0320 18:08:44.915431 2567 factory.go:221] Registration of the systemd container factory successfully Mar 20 18:08:44.921373 kubelet[2567]: I0320 18:08:44.916910 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 18:08:44.921373 kubelet[2567]: I0320 18:08:44.918359 2567 factory.go:221] Registration of the containerd container factory successfully Mar 20 18:08:44.937879 kubelet[2567]: I0320 18:08:44.937814 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 18:08:44.940990 kubelet[2567]: I0320 18:08:44.940616 2567 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 18:08:44.940990 kubelet[2567]: I0320 18:08:44.940640 2567 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 18:08:44.944384 kubelet[2567]: I0320 18:08:44.944021 2567 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 18:08:44.945321 kubelet[2567]: E0320 18:08:44.944896 2567 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 18:08:44.967072 kubelet[2567]: I0320 18:08:44.967049 2567 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 18:08:44.967072 kubelet[2567]: I0320 18:08:44.967066 2567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 18:08:44.967195 kubelet[2567]: I0320 18:08:44.967086 2567 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:08:44.967218 kubelet[2567]: I0320 18:08:44.967211 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 18:08:44.967238 kubelet[2567]: I0320 18:08:44.967221 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 18:08:44.967238 kubelet[2567]: I0320 18:08:44.967237 2567 policy_none.go:49] "None policy: Start" Mar 20 18:08:44.967843 kubelet[2567]: I0320 18:08:44.967821 2567 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 18:08:44.967843 kubelet[2567]: I0320 18:08:44.967842 2567 state_mem.go:35] "Initializing new in-memory state store" Mar 20 18:08:44.968014 kubelet[2567]: I0320 18:08:44.967996 2567 state_mem.go:75] "Updated machine memory state" Mar 20 18:08:44.971795 kubelet[2567]: I0320 18:08:44.971770 2567 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 18:08:44.972105 kubelet[2567]: I0320 18:08:44.971914 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 18:08:44.972105 kubelet[2567]: I0320 18:08:44.971931 2567 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 18:08:44.972196 kubelet[2567]: I0320 18:08:44.972127 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 18:08:45.074077 kubelet[2567]: I0320 18:08:45.074041 2567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 18:08:45.079955 kubelet[2567]: I0320 18:08:45.079912 2567 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 20 18:08:45.080076 kubelet[2567]: I0320 18:08:45.079997 2567 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 20 18:08:45.114852 kubelet[2567]: I0320 18:08:45.114813 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:45.114852 kubelet[2567]: I0320 18:08:45.114846 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:45.115001 kubelet[2567]: I0320 18:08:45.114869 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:45.115001 kubelet[2567]: I0320 18:08:45.114885 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 18:08:45.115001 kubelet[2567]: I0320 18:08:45.114903 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:08:45.115001 kubelet[2567]: I0320 18:08:45.114928 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:08:45.115001 kubelet[2567]: I0320 18:08:45.114942 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:45.115108 kubelet[2567]: I0320 18:08:45.114959 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:08:45.115108 kubelet[2567]: I0320 18:08:45.114975 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:08:45.906976 kubelet[2567]: I0320 18:08:45.903775 2567 apiserver.go:52] "Watching apiserver" Mar 20 18:08:45.914747 kubelet[2567]: I0320 18:08:45.914706 2567 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 18:08:45.962673 kubelet[2567]: E0320 18:08:45.962232 2567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 18:08:45.986963 kubelet[2567]: I0320 18:08:45.986856 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.986839601 podStartE2EDuration="986.839601ms" podCreationTimestamp="2025-03-20 18:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:08:45.977549014 +0000 UTC m=+1.130507269" watchObservedRunningTime="2025-03-20 18:08:45.986839601 +0000 UTC m=+1.139797816" Mar 20 18:08:46.008367 kubelet[2567]: I0320 18:08:46.008312 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.008296811 podStartE2EDuration="1.008296811s" podCreationTimestamp="2025-03-20 18:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:08:45.987599312 +0000 UTC m=+1.140557487" watchObservedRunningTime="2025-03-20 18:08:46.008296811 +0000 UTC m=+1.161255026" Mar 20 18:08:46.008542 kubelet[2567]: I0320 18:08:46.008391 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.008386237 podStartE2EDuration="1.008386237s" podCreationTimestamp="2025-03-20 18:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:08:46.00808491 +0000 UTC m=+1.161043125" watchObservedRunningTime="2025-03-20 18:08:46.008386237 +0000 UTC m=+1.161344452" Mar 20 18:08:49.335207 sudo[1661]: pam_unix(sudo:session): session closed for user root Mar 20 18:08:49.336369 sshd[1660]: Connection closed by 10.0.0.1 port 57684 Mar 20 18:08:49.336832 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Mar 20 18:08:49.340144 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:57684.service: Deactivated successfully. Mar 20 18:08:49.342132 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 18:08:49.342317 systemd[1]: session-7.scope: Consumed 8.022s CPU time, 225.7M memory peak. Mar 20 18:08:49.343325 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Mar 20 18:08:49.344535 systemd-logind[1442]: Removed session 7. Mar 20 18:08:50.520887 kubelet[2567]: I0320 18:08:50.520792 2567 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 18:08:50.521243 containerd[1465]: time="2025-03-20T18:08:50.521095366Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 18:08:50.521597 kubelet[2567]: I0320 18:08:50.521242 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 18:08:51.149921 systemd[1]: Created slice kubepods-besteffort-podcc7b018e_7e67_4122_a298_4d464ec4f7ce.slice - libcontainer container kubepods-besteffort-podcc7b018e_7e67_4122_a298_4d464ec4f7ce.slice. 
Mar 20 18:08:51.249763 kubelet[2567]: I0320 18:08:51.249724 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc7b018e-7e67-4122-a298-4d464ec4f7ce-xtables-lock\") pod \"kube-proxy-rkxzb\" (UID: \"cc7b018e-7e67-4122-a298-4d464ec4f7ce\") " pod="kube-system/kube-proxy-rkxzb" Mar 20 18:08:51.249763 kubelet[2567]: I0320 18:08:51.249767 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc7b018e-7e67-4122-a298-4d464ec4f7ce-lib-modules\") pod \"kube-proxy-rkxzb\" (UID: \"cc7b018e-7e67-4122-a298-4d464ec4f7ce\") " pod="kube-system/kube-proxy-rkxzb" Mar 20 18:08:51.249949 kubelet[2567]: I0320 18:08:51.249787 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc7b018e-7e67-4122-a298-4d464ec4f7ce-kube-proxy\") pod \"kube-proxy-rkxzb\" (UID: \"cc7b018e-7e67-4122-a298-4d464ec4f7ce\") " pod="kube-system/kube-proxy-rkxzb" Mar 20 18:08:51.249949 kubelet[2567]: I0320 18:08:51.249804 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x67jt\" (UniqueName: \"kubernetes.io/projected/cc7b018e-7e67-4122-a298-4d464ec4f7ce-kube-api-access-x67jt\") pod \"kube-proxy-rkxzb\" (UID: \"cc7b018e-7e67-4122-a298-4d464ec4f7ce\") " pod="kube-system/kube-proxy-rkxzb" Mar 20 18:08:51.359031 kubelet[2567]: E0320 18:08:51.358986 2567 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 20 18:08:51.359031 kubelet[2567]: E0320 18:08:51.359023 2567 projected.go:194] Error preparing data for projected volume kube-api-access-x67jt for pod kube-system/kube-proxy-rkxzb: configmap "kube-root-ca.crt" not found Mar 20 18:08:51.359179 kubelet[2567]: E0320 18:08:51.359086 2567 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7b018e-7e67-4122-a298-4d464ec4f7ce-kube-api-access-x67jt podName:cc7b018e-7e67-4122-a298-4d464ec4f7ce nodeName:}" failed. No retries permitted until 2025-03-20 18:08:51.859068052 +0000 UTC m=+7.012026267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x67jt" (UniqueName: "kubernetes.io/projected/cc7b018e-7e67-4122-a298-4d464ec4f7ce-kube-api-access-x67jt") pod "kube-proxy-rkxzb" (UID: "cc7b018e-7e67-4122-a298-4d464ec4f7ce") : configmap "kube-root-ca.crt" not found Mar 20 18:08:51.560004 systemd[1]: Created slice kubepods-besteffort-poda8cc1f94_33c8_46d4_8e0e_3ce70488d5a2.slice - libcontainer container kubepods-besteffort-poda8cc1f94_33c8_46d4_8e0e_3ce70488d5a2.slice. Mar 20 18:08:51.651831 kubelet[2567]: I0320 18:08:51.651790 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442ks\" (UniqueName: \"kubernetes.io/projected/a8cc1f94-33c8-46d4-8e0e-3ce70488d5a2-kube-api-access-442ks\") pod \"tigera-operator-64ff5465b7-f8422\" (UID: \"a8cc1f94-33c8-46d4-8e0e-3ce70488d5a2\") " pod="tigera-operator/tigera-operator-64ff5465b7-f8422" Mar 20 18:08:51.651831 kubelet[2567]: I0320 18:08:51.651831 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a8cc1f94-33c8-46d4-8e0e-3ce70488d5a2-var-lib-calico\") pod \"tigera-operator-64ff5465b7-f8422\" (UID: \"a8cc1f94-33c8-46d4-8e0e-3ce70488d5a2\") " pod="tigera-operator/tigera-operator-64ff5465b7-f8422" Mar 20 18:08:51.863638 containerd[1465]: time="2025-03-20T18:08:51.863559035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-f8422,Uid:a8cc1f94-33c8-46d4-8e0e-3ce70488d5a2,Namespace:tigera-operator,Attempt:0,}" Mar 20 18:08:51.879572 containerd[1465]: time="2025-03-20T18:08:51.879531588Z" level=info 
msg="connecting to shim 8e5fd71f99e01259be703676f6c23d8ed0f1819cd732f42be8ed68f65bc2001b" address="unix:///run/containerd/s/abaf255058af6df434e50ab4f8c332df6ec9f4ce57572b0dadc9ddeaaa128c9d" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:08:51.900470 systemd[1]: Started cri-containerd-8e5fd71f99e01259be703676f6c23d8ed0f1819cd732f42be8ed68f65bc2001b.scope - libcontainer container 8e5fd71f99e01259be703676f6c23d8ed0f1819cd732f42be8ed68f65bc2001b. Mar 20 18:08:51.926773 containerd[1465]: time="2025-03-20T18:08:51.926686082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-f8422,Uid:a8cc1f94-33c8-46d4-8e0e-3ce70488d5a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8e5fd71f99e01259be703676f6c23d8ed0f1819cd732f42be8ed68f65bc2001b\"" Mar 20 18:08:51.930459 containerd[1465]: time="2025-03-20T18:08:51.930349159Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 20 18:08:52.060104 containerd[1465]: time="2025-03-20T18:08:52.060070207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkxzb,Uid:cc7b018e-7e67-4122-a298-4d464ec4f7ce,Namespace:kube-system,Attempt:0,}" Mar 20 18:08:52.074896 containerd[1465]: time="2025-03-20T18:08:52.074745910Z" level=info msg="connecting to shim b25680afe849a3a22a9dae76aeac616ff167724525141ccdbcccbe3b4a511621" address="unix:///run/containerd/s/e73b4f6111a2913577899a32ecd7934066ad4671e0f8a14602f4a65514e18c47" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:08:52.099423 systemd[1]: Started cri-containerd-b25680afe849a3a22a9dae76aeac616ff167724525141ccdbcccbe3b4a511621.scope - libcontainer container b25680afe849a3a22a9dae76aeac616ff167724525141ccdbcccbe3b4a511621. 
Mar 20 18:08:52.119537 containerd[1465]: time="2025-03-20T18:08:52.119453479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkxzb,Uid:cc7b018e-7e67-4122-a298-4d464ec4f7ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"b25680afe849a3a22a9dae76aeac616ff167724525141ccdbcccbe3b4a511621\"" Mar 20 18:08:52.122040 containerd[1465]: time="2025-03-20T18:08:52.122006885Z" level=info msg="CreateContainer within sandbox \"b25680afe849a3a22a9dae76aeac616ff167724525141ccdbcccbe3b4a511621\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 18:08:52.128976 containerd[1465]: time="2025-03-20T18:08:52.128939113Z" level=info msg="Container 4a119ed9c1e2935c00546d781acba166d3f82b6bd8c4ef0e2506173f178001ff: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:08:52.135028 containerd[1465]: time="2025-03-20T18:08:52.134989439Z" level=info msg="CreateContainer within sandbox \"b25680afe849a3a22a9dae76aeac616ff167724525141ccdbcccbe3b4a511621\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a119ed9c1e2935c00546d781acba166d3f82b6bd8c4ef0e2506173f178001ff\"" Mar 20 18:08:52.135666 containerd[1465]: time="2025-03-20T18:08:52.135643253Z" level=info msg="StartContainer for \"4a119ed9c1e2935c00546d781acba166d3f82b6bd8c4ef0e2506173f178001ff\"" Mar 20 18:08:52.136903 containerd[1465]: time="2025-03-20T18:08:52.136881469Z" level=info msg="connecting to shim 4a119ed9c1e2935c00546d781acba166d3f82b6bd8c4ef0e2506173f178001ff" address="unix:///run/containerd/s/e73b4f6111a2913577899a32ecd7934066ad4671e0f8a14602f4a65514e18c47" protocol=ttrpc version=3 Mar 20 18:08:52.156502 systemd[1]: Started cri-containerd-4a119ed9c1e2935c00546d781acba166d3f82b6bd8c4ef0e2506173f178001ff.scope - libcontainer container 4a119ed9c1e2935c00546d781acba166d3f82b6bd8c4ef0e2506173f178001ff. 
Mar 20 18:08:52.188783 containerd[1465]: time="2025-03-20T18:08:52.188751473Z" level=info msg="StartContainer for \"4a119ed9c1e2935c00546d781acba166d3f82b6bd8c4ef0e2506173f178001ff\" returns successfully" Mar 20 18:08:54.875940 kubelet[2567]: I0320 18:08:54.875888 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rkxzb" podStartSLOduration=3.87587241 podStartE2EDuration="3.87587241s" podCreationTimestamp="2025-03-20 18:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:08:52.977038242 +0000 UTC m=+8.129996417" watchObservedRunningTime="2025-03-20 18:08:54.87587241 +0000 UTC m=+10.028830625" Mar 20 18:08:56.959298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138489730.mount: Deactivated successfully. Mar 20 18:08:57.126493 update_engine[1445]: I20250320 18:08:57.126385 1445 update_attempter.cc:509] Updating boot flags... 
Mar 20 18:08:57.158474 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2925) Mar 20 18:08:57.204345 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2924) Mar 20 18:08:57.326139 containerd[1465]: time="2025-03-20T18:08:57.326094875Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:57.326958 containerd[1465]: time="2025-03-20T18:08:57.326910565Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 20 18:08:57.327660 containerd[1465]: time="2025-03-20T18:08:57.327630319Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:57.336594 containerd[1465]: time="2025-03-20T18:08:57.336361303Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:08:57.337687 containerd[1465]: time="2025-03-20T18:08:57.337655548Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 5.407192084s" Mar 20 18:08:57.337811 containerd[1465]: time="2025-03-20T18:08:57.337792129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 20 18:08:57.342572 containerd[1465]: time="2025-03-20T18:08:57.342546283Z" level=info msg="CreateContainer within sandbox 
\"8e5fd71f99e01259be703676f6c23d8ed0f1819cd732f42be8ed68f65bc2001b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 20 18:08:57.350326 containerd[1465]: time="2025-03-20T18:08:57.349745944Z" level=info msg="Container 26f3f5c79d1a354a65bad03a06154dfccc44943427ebebf8152cb61a720eec32: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:08:57.358093 containerd[1465]: time="2025-03-20T18:08:57.358045060Z" level=info msg="CreateContainer within sandbox \"8e5fd71f99e01259be703676f6c23d8ed0f1819cd732f42be8ed68f65bc2001b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"26f3f5c79d1a354a65bad03a06154dfccc44943427ebebf8152cb61a720eec32\"" Mar 20 18:08:57.358558 containerd[1465]: time="2025-03-20T18:08:57.358524055Z" level=info msg="StartContainer for \"26f3f5c79d1a354a65bad03a06154dfccc44943427ebebf8152cb61a720eec32\"" Mar 20 18:08:57.359524 containerd[1465]: time="2025-03-20T18:08:57.359487208Z" level=info msg="connecting to shim 26f3f5c79d1a354a65bad03a06154dfccc44943427ebebf8152cb61a720eec32" address="unix:///run/containerd/s/abaf255058af6df434e50ab4f8c332df6ec9f4ce57572b0dadc9ddeaaa128c9d" protocol=ttrpc version=3 Mar 20 18:08:57.397454 systemd[1]: Started cri-containerd-26f3f5c79d1a354a65bad03a06154dfccc44943427ebebf8152cb61a720eec32.scope - libcontainer container 26f3f5c79d1a354a65bad03a06154dfccc44943427ebebf8152cb61a720eec32. 
Mar 20 18:08:57.453329 containerd[1465]: time="2025-03-20T18:08:57.451835245Z" level=info msg="StartContainer for \"26f3f5c79d1a354a65bad03a06154dfccc44943427ebebf8152cb61a720eec32\" returns successfully" Mar 20 18:08:57.993726 kubelet[2567]: I0320 18:08:57.993661 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-f8422" podStartSLOduration=1.580805168 podStartE2EDuration="6.993645203s" podCreationTimestamp="2025-03-20 18:08:51 +0000 UTC" firstStartedPulling="2025-03-20 18:08:51.928171125 +0000 UTC m=+7.081129340" lastFinishedPulling="2025-03-20 18:08:57.34101116 +0000 UTC m=+12.493969375" observedRunningTime="2025-03-20 18:08:57.99343705 +0000 UTC m=+13.146395225" watchObservedRunningTime="2025-03-20 18:08:57.993645203 +0000 UTC m=+13.146603418" Mar 20 18:09:01.329405 systemd[1]: Created slice kubepods-besteffort-pod3bdceaf2_5c40_4a04_9462_f3c3fd7ab0be.slice - libcontainer container kubepods-besteffort-pod3bdceaf2_5c40_4a04_9462_f3c3fd7ab0be.slice. Mar 20 18:09:01.382790 systemd[1]: Created slice kubepods-besteffort-podfdf181d2_988e_4e88_8ff1_a48cda2e5b5b.slice - libcontainer container kubepods-besteffort-podfdf181d2_988e_4e88_8ff1_a48cda2e5b5b.slice. 
Mar 20 18:09:01.413045 kubelet[2567]: I0320 18:09:01.413005 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4r9\" (UniqueName: \"kubernetes.io/projected/3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be-kube-api-access-kl4r9\") pod \"calico-typha-69d456659c-9tf92\" (UID: \"3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be\") " pod="calico-system/calico-typha-69d456659c-9tf92" Mar 20 18:09:01.413045 kubelet[2567]: I0320 18:09:01.413050 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be-typha-certs\") pod \"calico-typha-69d456659c-9tf92\" (UID: \"3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be\") " pod="calico-system/calico-typha-69d456659c-9tf92" Mar 20 18:09:01.413414 kubelet[2567]: I0320 18:09:01.413071 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be-tigera-ca-bundle\") pod \"calico-typha-69d456659c-9tf92\" (UID: \"3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be\") " pod="calico-system/calico-typha-69d456659c-9tf92" Mar 20 18:09:01.477321 kubelet[2567]: E0320 18:09:01.477234 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsfqz" podUID="4565d58f-9b5f-479d-bb63-738e37af8858" Mar 20 18:09:01.514252 kubelet[2567]: I0320 18:09:01.514133 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-var-run-calico\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " 
pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514252 kubelet[2567]: I0320 18:09:01.514179 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-cni-log-dir\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514252 kubelet[2567]: I0320 18:09:01.514199 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-xtables-lock\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514252 kubelet[2567]: I0320 18:09:01.514226 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-flexvol-driver-host\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514252 kubelet[2567]: I0320 18:09:01.514245 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-policysync\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514489 kubelet[2567]: I0320 18:09:01.514261 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-lib-modules\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514489 kubelet[2567]: 
I0320 18:09:01.514294 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-tigera-ca-bundle\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514489 kubelet[2567]: I0320 18:09:01.514314 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-var-lib-calico\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514489 kubelet[2567]: I0320 18:09:01.514340 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9ss\" (UniqueName: \"kubernetes.io/projected/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-kube-api-access-bq9ss\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514489 kubelet[2567]: I0320 18:09:01.514356 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-cni-net-dir\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514586 kubelet[2567]: I0320 18:09:01.514371 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-node-certs\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.514586 kubelet[2567]: I0320 18:09:01.514394 2567 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fdf181d2-988e-4e88-8ff1-a48cda2e5b5b-cni-bin-dir\") pod \"calico-node-9lfmn\" (UID: \"fdf181d2-988e-4e88-8ff1-a48cda2e5b5b\") " pod="calico-system/calico-node-9lfmn" Mar 20 18:09:01.614786 kubelet[2567]: I0320 18:09:01.614665 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4565d58f-9b5f-479d-bb63-738e37af8858-socket-dir\") pod \"csi-node-driver-bsfqz\" (UID: \"4565d58f-9b5f-479d-bb63-738e37af8858\") " pod="calico-system/csi-node-driver-bsfqz" Mar 20 18:09:01.614786 kubelet[2567]: I0320 18:09:01.614746 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4565d58f-9b5f-479d-bb63-738e37af8858-varrun\") pod \"csi-node-driver-bsfqz\" (UID: \"4565d58f-9b5f-479d-bb63-738e37af8858\") " pod="calico-system/csi-node-driver-bsfqz" Mar 20 18:09:01.614911 kubelet[2567]: I0320 18:09:01.614796 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4565d58f-9b5f-479d-bb63-738e37af8858-kubelet-dir\") pod \"csi-node-driver-bsfqz\" (UID: \"4565d58f-9b5f-479d-bb63-738e37af8858\") " pod="calico-system/csi-node-driver-bsfqz" Mar 20 18:09:01.614911 kubelet[2567]: I0320 18:09:01.614813 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4565d58f-9b5f-479d-bb63-738e37af8858-registration-dir\") pod \"csi-node-driver-bsfqz\" (UID: \"4565d58f-9b5f-479d-bb63-738e37af8858\") " pod="calico-system/csi-node-driver-bsfqz" Mar 20 18:09:01.614911 kubelet[2567]: I0320 18:09:01.614855 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pv7ns\" (UniqueName: \"kubernetes.io/projected/4565d58f-9b5f-479d-bb63-738e37af8858-kube-api-access-pv7ns\") pod \"csi-node-driver-bsfqz\" (UID: \"4565d58f-9b5f-479d-bb63-738e37af8858\") " pod="calico-system/csi-node-driver-bsfqz" Mar 20 18:09:01.616298 kubelet[2567]: E0320 18:09:01.616112 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.616298 kubelet[2567]: W0320 18:09:01.616133 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.616298 kubelet[2567]: E0320 18:09:01.616159 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.616571 kubelet[2567]: E0320 18:09:01.616511 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.616649 kubelet[2567]: W0320 18:09:01.616565 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.616649 kubelet[2567]: E0320 18:09:01.616628 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.617058 kubelet[2567]: E0320 18:09:01.616870 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.617058 kubelet[2567]: W0320 18:09:01.616884 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.617058 kubelet[2567]: E0320 18:09:01.616901 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.617176 kubelet[2567]: E0320 18:09:01.617102 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.617176 kubelet[2567]: W0320 18:09:01.617114 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.617894 kubelet[2567]: E0320 18:09:01.617242 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.617894 kubelet[2567]: E0320 18:09:01.617778 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.617894 kubelet[2567]: W0320 18:09:01.617789 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.617894 kubelet[2567]: E0320 18:09:01.617840 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.618222 kubelet[2567]: E0320 18:09:01.618103 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.618222 kubelet[2567]: W0320 18:09:01.618117 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.618222 kubelet[2567]: E0320 18:09:01.618132 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.618479 kubelet[2567]: E0320 18:09:01.618387 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.618479 kubelet[2567]: W0320 18:09:01.618399 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.618479 kubelet[2567]: E0320 18:09:01.618418 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.619048 kubelet[2567]: E0320 18:09:01.618797 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.619048 kubelet[2567]: W0320 18:09:01.618812 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.619048 kubelet[2567]: E0320 18:09:01.618827 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.622442 kubelet[2567]: E0320 18:09:01.622357 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.622442 kubelet[2567]: W0320 18:09:01.622372 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.622579 kubelet[2567]: E0320 18:09:01.622518 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.622893 kubelet[2567]: E0320 18:09:01.622797 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.622893 kubelet[2567]: W0320 18:09:01.622810 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.622893 kubelet[2567]: E0320 18:09:01.622821 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.623167 kubelet[2567]: E0320 18:09:01.623055 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.623167 kubelet[2567]: W0320 18:09:01.623066 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.623167 kubelet[2567]: E0320 18:09:01.623075 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.624432 kubelet[2567]: E0320 18:09:01.624417 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.627339 kubelet[2567]: W0320 18:09:01.624501 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.627339 kubelet[2567]: E0320 18:09:01.624517 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.637371 kubelet[2567]: E0320 18:09:01.635315 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.637371 kubelet[2567]: W0320 18:09:01.635336 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.637371 kubelet[2567]: E0320 18:09:01.635356 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.660519 containerd[1465]: time="2025-03-20T18:09:01.660476667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d456659c-9tf92,Uid:3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be,Namespace:calico-system,Attempt:0,}" Mar 20 18:09:01.685741 containerd[1465]: time="2025-03-20T18:09:01.685699069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9lfmn,Uid:fdf181d2-988e-4e88-8ff1-a48cda2e5b5b,Namespace:calico-system,Attempt:0,}" Mar 20 18:09:01.701871 containerd[1465]: time="2025-03-20T18:09:01.701825768Z" level=info msg="connecting to shim 08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026" address="unix:///run/containerd/s/0a4faea41fbd76bbfba4cfd04f8be4a5b1c0aa78e0569b8dbaf0a14e54a60ae6" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:01.703684 containerd[1465]: time="2025-03-20T18:09:01.703654286Z" level=info msg="connecting to shim 5ba3703976df5b0f70cb7daffda380eb6dccc7fb43ebfd1c39228c2860a98e39" address="unix:///run/containerd/s/7e7f17e7b8236bc6f64f59bbafd700ff4ef7927ff9d76f354e545e9caa42aa63" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:01.716604 kubelet[2567]: E0320 18:09:01.716424 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Mar 20 18:09:01.716604 kubelet[2567]: W0320 18:09:01.716443 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.716604 kubelet[2567]: E0320 18:09:01.716462 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.716939 kubelet[2567]: E0320 18:09:01.716925 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.717077 kubelet[2567]: W0320 18:09:01.717001 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.717077 kubelet[2567]: E0320 18:09:01.717020 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.717709 kubelet[2567]: E0320 18:09:01.717654 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.717709 kubelet[2567]: W0320 18:09:01.717668 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.717924 kubelet[2567]: E0320 18:09:01.717819 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.718211 kubelet[2567]: E0320 18:09:01.718197 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.718432 kubelet[2567]: W0320 18:09:01.718296 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.718432 kubelet[2567]: E0320 18:09:01.718345 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.718806 kubelet[2567]: E0320 18:09:01.718713 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.718806 kubelet[2567]: W0320 18:09:01.718750 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.718882 kubelet[2567]: E0320 18:09:01.718810 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.719516 kubelet[2567]: E0320 18:09:01.719418 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.719516 kubelet[2567]: W0320 18:09:01.719431 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.719516 kubelet[2567]: E0320 18:09:01.719487 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.719829 kubelet[2567]: E0320 18:09:01.719728 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.719829 kubelet[2567]: W0320 18:09:01.719740 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.719829 kubelet[2567]: E0320 18:09:01.719795 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.720059 kubelet[2567]: E0320 18:09:01.720021 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.720059 kubelet[2567]: W0320 18:09:01.720032 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.720251 kubelet[2567]: E0320 18:09:01.720196 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.720457 kubelet[2567]: E0320 18:09:01.720375 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.720457 kubelet[2567]: W0320 18:09:01.720386 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.720457 kubelet[2567]: E0320 18:09:01.720432 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.720685 kubelet[2567]: E0320 18:09:01.720637 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.720685 kubelet[2567]: W0320 18:09:01.720653 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.720933 kubelet[2567]: E0320 18:09:01.720900 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.721074 kubelet[2567]: E0320 18:09:01.721037 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.721074 kubelet[2567]: W0320 18:09:01.721048 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.721243 kubelet[2567]: E0320 18:09:01.721189 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.721388 kubelet[2567]: E0320 18:09:01.721363 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.721388 kubelet[2567]: W0320 18:09:01.721375 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.721614 kubelet[2567]: E0320 18:09:01.721592 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.722320 kubelet[2567]: E0320 18:09:01.721855 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.722320 kubelet[2567]: W0320 18:09:01.721870 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.722320 kubelet[2567]: E0320 18:09:01.721974 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.722320 kubelet[2567]: E0320 18:09:01.722030 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.722320 kubelet[2567]: W0320 18:09:01.722038 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.722320 kubelet[2567]: E0320 18:09:01.722074 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.722320 kubelet[2567]: E0320 18:09:01.722168 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.722320 kubelet[2567]: W0320 18:09:01.722175 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.722320 kubelet[2567]: E0320 18:09:01.722210 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.722462 systemd[1]: Started cri-containerd-08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026.scope - libcontainer container 08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026. 
Mar 20 18:09:01.722686 kubelet[2567]: E0320 18:09:01.722668 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.722686 kubelet[2567]: W0320 18:09:01.722681 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.722735 kubelet[2567]: E0320 18:09:01.722693 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.722945 kubelet[2567]: E0320 18:09:01.722930 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.722945 kubelet[2567]: W0320 18:09:01.722942 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.723007 kubelet[2567]: E0320 18:09:01.722978 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.723143 kubelet[2567]: E0320 18:09:01.723130 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.723143 kubelet[2567]: W0320 18:09:01.723141 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.723195 kubelet[2567]: E0320 18:09:01.723170 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.723294 kubelet[2567]: E0320 18:09:01.723272 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.723294 kubelet[2567]: W0320 18:09:01.723288 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.723430 kubelet[2567]: E0320 18:09:01.723420 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.723430 kubelet[2567]: W0320 18:09:01.723429 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.723586 kubelet[2567]: E0320 18:09:01.723573 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.723586 kubelet[2567]: W0320 18:09:01.723584 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.723637 kubelet[2567]: E0320 18:09:01.723593 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.723637 kubelet[2567]: E0320 18:09:01.723627 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.723680 kubelet[2567]: E0320 18:09:01.723646 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.723780 kubelet[2567]: E0320 18:09:01.723756 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.723780 kubelet[2567]: W0320 18:09:01.723765 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.723886 kubelet[2567]: E0320 18:09:01.723795 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.724087 kubelet[2567]: E0320 18:09:01.724039 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.724330 kubelet[2567]: W0320 18:09:01.724299 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.724383 kubelet[2567]: E0320 18:09:01.724330 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.724745 kubelet[2567]: E0320 18:09:01.724725 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.724745 kubelet[2567]: W0320 18:09:01.724742 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.725061 kubelet[2567]: E0320 18:09:01.724755 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.732832 systemd[1]: Started cri-containerd-5ba3703976df5b0f70cb7daffda380eb6dccc7fb43ebfd1c39228c2860a98e39.scope - libcontainer container 5ba3703976df5b0f70cb7daffda380eb6dccc7fb43ebfd1c39228c2860a98e39. Mar 20 18:09:01.733395 kubelet[2567]: E0320 18:09:01.733319 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.733395 kubelet[2567]: W0320 18:09:01.733337 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.733395 kubelet[2567]: E0320 18:09:01.733352 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 18:09:01.756763 kubelet[2567]: E0320 18:09:01.756723 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 18:09:01.756763 kubelet[2567]: W0320 18:09:01.756746 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 18:09:01.756763 kubelet[2567]: E0320 18:09:01.756769 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 18:09:01.760620 containerd[1465]: time="2025-03-20T18:09:01.760566692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9lfmn,Uid:fdf181d2-988e-4e88-8ff1-a48cda2e5b5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026\"" Mar 20 18:09:01.771912 containerd[1465]: time="2025-03-20T18:09:01.771877084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 20 18:09:01.813043 containerd[1465]: time="2025-03-20T18:09:01.812966751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d456659c-9tf92,Uid:3bdceaf2-5c40-4a04-9462-f3c3fd7ab0be,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ba3703976df5b0f70cb7daffda380eb6dccc7fb43ebfd1c39228c2860a98e39\"" Mar 20 18:09:02.540136 containerd[1465]: time="2025-03-20T18:09:02.540067890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:02.542570 containerd[1465]: time="2025-03-20T18:09:02.542486150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 20 18:09:02.544160 
containerd[1465]: time="2025-03-20T18:09:02.543318413Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:02.545013 containerd[1465]: time="2025-03-20T18:09:02.544896849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:02.546446 containerd[1465]: time="2025-03-20T18:09:02.546372832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 774.453263ms" Mar 20 18:09:02.546446 containerd[1465]: time="2025-03-20T18:09:02.546420118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 20 18:09:02.547652 containerd[1465]: time="2025-03-20T18:09:02.547623867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 20 18:09:02.551494 containerd[1465]: time="2025-03-20T18:09:02.551454343Z" level=info msg="CreateContainer within sandbox \"08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 20 18:09:02.559104 containerd[1465]: time="2025-03-20T18:09:02.559063087Z" level=info msg="Container 68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:02.567477 containerd[1465]: time="2025-03-20T18:09:02.567434246Z" level=info 
msg="CreateContainer within sandbox \"08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b\"" Mar 20 18:09:02.568105 containerd[1465]: time="2025-03-20T18:09:02.568081286Z" level=info msg="StartContainer for \"68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b\"" Mar 20 18:09:02.569654 containerd[1465]: time="2025-03-20T18:09:02.569619957Z" level=info msg="connecting to shim 68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b" address="unix:///run/containerd/s/0a4faea41fbd76bbfba4cfd04f8be4a5b1c0aa78e0569b8dbaf0a14e54a60ae6" protocol=ttrpc version=3 Mar 20 18:09:02.593563 systemd[1]: Started cri-containerd-68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b.scope - libcontainer container 68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b. Mar 20 18:09:02.654177 systemd[1]: cri-containerd-68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b.scope: Deactivated successfully. Mar 20 18:09:02.656333 systemd[1]: cri-containerd-68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b.scope: Consumed 36ms CPU time, 7.8M memory peak, 6.2M written to disk. 
Mar 20 18:09:02.699083 containerd[1465]: time="2025-03-20T18:09:02.699044701Z" level=info msg="StartContainer for \"68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b\" returns successfully" Mar 20 18:09:02.702648 containerd[1465]: time="2025-03-20T18:09:02.702601823Z" level=info msg="received exit event container_id:\"68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b\" id:\"68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b\" pid:3126 exited_at:{seconds:1742494142 nanos:666665282}" Mar 20 18:09:02.702749 containerd[1465]: time="2025-03-20T18:09:02.702696514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b\" id:\"68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b\" pid:3126 exited_at:{seconds:1742494142 nanos:666665282}" Mar 20 18:09:02.748260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68136685c2d1820fe76db922e43e59357727c5208bb482b7686e17d4d8d8320b-rootfs.mount: Deactivated successfully. 
Mar 20 18:09:02.946821 kubelet[2567]: E0320 18:09:02.946679 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsfqz" podUID="4565d58f-9b5f-479d-bb63-738e37af8858"
Mar 20 18:09:04.946397 kubelet[2567]: E0320 18:09:04.946009 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsfqz" podUID="4565d58f-9b5f-479d-bb63-738e37af8858"
Mar 20 18:09:05.098110 containerd[1465]: time="2025-03-20T18:09:05.098056166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:09:05.098651 containerd[1465]: time="2025-03-20T18:09:05.098583063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957"
Mar 20 18:09:05.099262 containerd[1465]: time="2025-03-20T18:09:05.099233614Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:09:05.101237 containerd[1465]: time="2025-03-20T18:09:05.101189785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:09:05.101829 containerd[1465]: time="2025-03-20T18:09:05.101809252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 2.554038807s"
Mar 20 18:09:05.101871 containerd[1465]: time="2025-03-20T18:09:05.101835735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\""
Mar 20 18:09:05.102905 containerd[1465]: time="2025-03-20T18:09:05.102848045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 20 18:09:05.113309 containerd[1465]: time="2025-03-20T18:09:05.110582122Z" level=info msg="CreateContainer within sandbox \"5ba3703976df5b0f70cb7daffda380eb6dccc7fb43ebfd1c39228c2860a98e39\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 20 18:09:05.119908 containerd[1465]: time="2025-03-20T18:09:05.119868767Z" level=info msg="Container 9ffeeb207bc9fb28712335d844849e6b8f3e227e494e27675fb460fcd5c14dbf: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:09:05.128585 containerd[1465]: time="2025-03-20T18:09:05.128530544Z" level=info msg="CreateContainer within sandbox \"5ba3703976df5b0f70cb7daffda380eb6dccc7fb43ebfd1c39228c2860a98e39\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ffeeb207bc9fb28712335d844849e6b8f3e227e494e27675fb460fcd5c14dbf\""
Mar 20 18:09:05.130432 containerd[1465]: time="2025-03-20T18:09:05.129465965Z" level=info msg="StartContainer for \"9ffeeb207bc9fb28712335d844849e6b8f3e227e494e27675fb460fcd5c14dbf\""
Mar 20 18:09:05.130843 containerd[1465]: time="2025-03-20T18:09:05.130813711Z" level=info msg="connecting to shim 9ffeeb207bc9fb28712335d844849e6b8f3e227e494e27675fb460fcd5c14dbf" address="unix:///run/containerd/s/7e7f17e7b8236bc6f64f59bbafd700ff4ef7927ff9d76f354e545e9caa42aa63" protocol=ttrpc version=3
Mar 20 18:09:05.165505 systemd[1]: Started cri-containerd-9ffeeb207bc9fb28712335d844849e6b8f3e227e494e27675fb460fcd5c14dbf.scope - libcontainer container 9ffeeb207bc9fb28712335d844849e6b8f3e227e494e27675fb460fcd5c14dbf.
Mar 20 18:09:05.207383 containerd[1465]: time="2025-03-20T18:09:05.203643313Z" level=info msg="StartContainer for \"9ffeeb207bc9fb28712335d844849e6b8f3e227e494e27675fb460fcd5c14dbf\" returns successfully"
Mar 20 18:09:06.030703 kubelet[2567]: I0320 18:09:06.030639 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69d456659c-9tf92" podStartSLOduration=1.742685745 podStartE2EDuration="5.030624035s" podCreationTimestamp="2025-03-20 18:09:01 +0000 UTC" firstStartedPulling="2025-03-20 18:09:01.814502951 +0000 UTC m=+16.967461166" lastFinishedPulling="2025-03-20 18:09:05.102441241 +0000 UTC m=+20.255399456" observedRunningTime="2025-03-20 18:09:06.030451217 +0000 UTC m=+21.183409432" watchObservedRunningTime="2025-03-20 18:09:06.030624035 +0000 UTC m=+21.183582210"
Mar 20 18:09:06.945373 kubelet[2567]: E0320 18:09:06.945319 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsfqz" podUID="4565d58f-9b5f-479d-bb63-738e37af8858"
Mar 20 18:09:08.214638 containerd[1465]: time="2025-03-20T18:09:08.214589033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:09:08.215308 containerd[1465]: time="2025-03-20T18:09:08.215243256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396"
Mar 20 18:09:08.216159 containerd[1465]: time="2025-03-20T18:09:08.216106898Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:09:08.217872 containerd[1465]: time="2025-03-20T18:09:08.217840943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:09:08.219178 containerd[1465]: time="2025-03-20T18:09:08.219150427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 3.116268899s"
Mar 20 18:09:08.219237 containerd[1465]: time="2025-03-20T18:09:08.219179070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\""
Mar 20 18:09:08.221048 containerd[1465]: time="2025-03-20T18:09:08.221012404Z" level=info msg="CreateContainer within sandbox \"08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 20 18:09:08.228079 containerd[1465]: time="2025-03-20T18:09:08.228041833Z" level=info msg="Container 383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:09:08.236713 containerd[1465]: time="2025-03-20T18:09:08.236667374Z" level=info msg="CreateContainer within sandbox \"08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b\""
Mar 20 18:09:08.237325 containerd[1465]: time="2025-03-20T18:09:08.237100535Z" level=info msg="StartContainer for \"383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b\""
Mar 20 18:09:08.238668 containerd[1465]: time="2025-03-20T18:09:08.238630440Z" level=info msg="connecting to shim 383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b" address="unix:///run/containerd/s/0a4faea41fbd76bbfba4cfd04f8be4a5b1c0aa78e0569b8dbaf0a14e54a60ae6" protocol=ttrpc version=3
Mar 20 18:09:08.262515 systemd[1]: Started cri-containerd-383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b.scope - libcontainer container 383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b.
Mar 20 18:09:08.300028 containerd[1465]: time="2025-03-20T18:09:08.299991157Z" level=info msg="StartContainer for \"383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b\" returns successfully"
Mar 20 18:09:08.754101 systemd[1]: cri-containerd-383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b.scope: Deactivated successfully.
Mar 20 18:09:08.754384 systemd[1]: cri-containerd-383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b.scope: Consumed 438ms CPU time, 159.2M memory peak, 4K read from disk, 150.3M written to disk.
Mar 20 18:09:08.757323 containerd[1465]: time="2025-03-20T18:09:08.757173327Z" level=info msg="received exit event container_id:\"383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b\" id:\"383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b\" pid:3230 exited_at:{seconds:1742494148 nanos:756996070}"
Mar 20 18:09:08.757323 containerd[1465]: time="2025-03-20T18:09:08.757271457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b\" id:\"383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b\" pid:3230 exited_at:{seconds:1742494148 nanos:756996070}"
Mar 20 18:09:08.776818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-383628f0c986c1a155ecc605e7a3d88212f2a63cd6c3597794ea23a702a0826b-rootfs.mount: Deactivated successfully.
Mar 20 18:09:08.781804 kubelet[2567]: I0320 18:09:08.781684 2567 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 20 18:09:08.823812 systemd[1]: Created slice kubepods-besteffort-pod82f33d5f_10f5_47ef_a24f_c0b87da4b7ef.slice - libcontainer container kubepods-besteffort-pod82f33d5f_10f5_47ef_a24f_c0b87da4b7ef.slice.
Mar 20 18:09:08.837688 systemd[1]: Created slice kubepods-burstable-pod4babc690_4f81_4ee4_89a2_e8d0442c4e6a.slice - libcontainer container kubepods-burstable-pod4babc690_4f81_4ee4_89a2_e8d0442c4e6a.slice.
Mar 20 18:09:08.847708 systemd[1]: Created slice kubepods-besteffort-podad4bd5fc_9151_4c1c_8fc2_0e7cd0a5e558.slice - libcontainer container kubepods-besteffort-podad4bd5fc_9151_4c1c_8fc2_0e7cd0a5e558.slice.
Mar 20 18:09:08.872960 systemd[1]: Created slice kubepods-burstable-podf0710fdf_b26c_4817_bfe9_669a165df7c1.slice - libcontainer container kubepods-burstable-podf0710fdf_b26c_4817_bfe9_669a165df7c1.slice.
Mar 20 18:09:08.873359 kubelet[2567]: I0320 18:09:08.873169 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9kf7\" (UniqueName: \"kubernetes.io/projected/82f33d5f-10f5-47ef-a24f-c0b87da4b7ef-kube-api-access-g9kf7\") pod \"calico-apiserver-76d74f9c86-4ns2k\" (UID: \"82f33d5f-10f5-47ef-a24f-c0b87da4b7ef\") " pod="calico-apiserver/calico-apiserver-76d74f9c86-4ns2k"
Mar 20 18:09:08.873359 kubelet[2567]: I0320 18:09:08.873207 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/82f33d5f-10f5-47ef-a24f-c0b87da4b7ef-calico-apiserver-certs\") pod \"calico-apiserver-76d74f9c86-4ns2k\" (UID: \"82f33d5f-10f5-47ef-a24f-c0b87da4b7ef\") " pod="calico-apiserver/calico-apiserver-76d74f9c86-4ns2k"
Mar 20 18:09:08.880586 systemd[1]: Created slice kubepods-besteffort-pod71415888_c7b7_4f31_8d88_615151729410.slice - libcontainer container kubepods-besteffort-pod71415888_c7b7_4f31_8d88_615151729410.slice.
Mar 20 18:09:08.951089 systemd[1]: Created slice kubepods-besteffort-pod4565d58f_9b5f_479d_bb63_738e37af8858.slice - libcontainer container kubepods-besteffort-pod4565d58f_9b5f_479d_bb63_738e37af8858.slice.
Mar 20 18:09:08.953019 containerd[1465]: time="2025-03-20T18:09:08.952980754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsfqz,Uid:4565d58f-9b5f-479d-bb63-738e37af8858,Namespace:calico-system,Attempt:0,}"
Mar 20 18:09:08.974495 kubelet[2567]: I0320 18:09:08.974382 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71415888-c7b7-4f31-8d88-615151729410-calico-apiserver-certs\") pod \"calico-apiserver-76d74f9c86-ws9kv\" (UID: \"71415888-c7b7-4f31-8d88-615151729410\") " pod="calico-apiserver/calico-apiserver-76d74f9c86-ws9kv"
Mar 20 18:09:08.974495 kubelet[2567]: I0320 18:09:08.974447 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0710fdf-b26c-4817-bfe9-669a165df7c1-config-volume\") pod \"coredns-6f6b679f8f-bx4cn\" (UID: \"f0710fdf-b26c-4817-bfe9-669a165df7c1\") " pod="kube-system/coredns-6f6b679f8f-bx4cn"
Mar 20 18:09:08.974663 kubelet[2567]: I0320 18:09:08.974598 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkm8t\" (UniqueName: \"kubernetes.io/projected/f0710fdf-b26c-4817-bfe9-669a165df7c1-kube-api-access-fkm8t\") pod \"coredns-6f6b679f8f-bx4cn\" (UID: \"f0710fdf-b26c-4817-bfe9-669a165df7c1\") " pod="kube-system/coredns-6f6b679f8f-bx4cn"
Mar 20 18:09:08.977395 kubelet[2567]: I0320 18:09:08.974878 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4babc690-4f81-4ee4-89a2-e8d0442c4e6a-config-volume\") pod \"coredns-6f6b679f8f-slp8s\" (UID: \"4babc690-4f81-4ee4-89a2-e8d0442c4e6a\") " pod="kube-system/coredns-6f6b679f8f-slp8s"
Mar 20 18:09:08.983749 kubelet[2567]: I0320 18:09:08.974906 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbw77\" (UniqueName: \"kubernetes.io/projected/4babc690-4f81-4ee4-89a2-e8d0442c4e6a-kube-api-access-jbw77\") pod \"coredns-6f6b679f8f-slp8s\" (UID: \"4babc690-4f81-4ee4-89a2-e8d0442c4e6a\") " pod="kube-system/coredns-6f6b679f8f-slp8s"
Mar 20 18:09:08.983931 kubelet[2567]: I0320 18:09:08.983894 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558-tigera-ca-bundle\") pod \"calico-kube-controllers-69bc6fc597-kl4kp\" (UID: \"ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558\") " pod="calico-system/calico-kube-controllers-69bc6fc597-kl4kp"
Mar 20 18:09:08.984156 kubelet[2567]: I0320 18:09:08.984052 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmjw\" (UniqueName: \"kubernetes.io/projected/71415888-c7b7-4f31-8d88-615151729410-kube-api-access-dfmjw\") pod \"calico-apiserver-76d74f9c86-ws9kv\" (UID: \"71415888-c7b7-4f31-8d88-615151729410\") " pod="calico-apiserver/calico-apiserver-76d74f9c86-ws9kv"
Mar 20 18:09:08.984156 kubelet[2567]: I0320 18:09:08.984095 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78xmz\" (UniqueName: \"kubernetes.io/projected/ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558-kube-api-access-78xmz\") pod \"calico-kube-controllers-69bc6fc597-kl4kp\" (UID: \"ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558\") " pod="calico-system/calico-kube-controllers-69bc6fc597-kl4kp"
Mar 20 18:09:09.025136 containerd[1465]: time="2025-03-20T18:09:09.024927907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 20 18:09:09.127002 containerd[1465]: time="2025-03-20T18:09:09.126966383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-4ns2k,Uid:82f33d5f-10f5-47ef-a24f-c0b87da4b7ef,Namespace:calico-apiserver,Attempt:0,}"
Mar 20 18:09:09.142970 containerd[1465]: time="2025-03-20T18:09:09.142932000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slp8s,Uid:4babc690-4f81-4ee4-89a2-e8d0442c4e6a,Namespace:kube-system,Attempt:0,}"
Mar 20 18:09:09.155729 containerd[1465]: time="2025-03-20T18:09:09.155457944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bc6fc597-kl4kp,Uid:ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558,Namespace:calico-system,Attempt:0,}"
Mar 20 18:09:09.176547 containerd[1465]: time="2025-03-20T18:09:09.176498665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bx4cn,Uid:f0710fdf-b26c-4817-bfe9-669a165df7c1,Namespace:kube-system,Attempt:0,}"
Mar 20 18:09:09.192325 containerd[1465]: time="2025-03-20T18:09:09.191675810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-ws9kv,Uid:71415888-c7b7-4f31-8d88-615151729410,Namespace:calico-apiserver,Attempt:0,}"
Mar 20 18:09:09.233999 containerd[1465]: time="2025-03-20T18:09:09.232512378Z" level=error msg="Failed to destroy network for sandbox \"1e92be17cff3c0666ef4fed33d7e372f8bec75365c1fb55cf207421529f916b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.236909 containerd[1465]: time="2025-03-20T18:09:09.236860975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-4ns2k,Uid:82f33d5f-10f5-47ef-a24f-c0b87da4b7ef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e92be17cff3c0666ef4fed33d7e372f8bec75365c1fb55cf207421529f916b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.243431 containerd[1465]: time="2025-03-20T18:09:09.242195982Z" level=error msg="Failed to destroy network for sandbox \"37754e6ad18cca31022ead6ae992cb37ab04780c8e6b3d1893adf023cef9a903\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.242749 systemd[1]: run-netns-cni\x2dddad814f\x2d5454\x2d9422\x2d5f07\x2ddf5de5aa4fd8.mount: Deactivated successfully.
Mar 20 18:09:09.244012 containerd[1465]: time="2025-03-20T18:09:09.243779927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slp8s,Uid:4babc690-4f81-4ee4-89a2-e8d0442c4e6a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37754e6ad18cca31022ead6ae992cb37ab04780c8e6b3d1893adf023cef9a903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.245084 systemd[1]: run-netns-cni\x2dd7510493\x2d3ac6\x2d14eb\x2d4ea2\x2d5a71beb3a475.mount: Deactivated successfully.
Mar 20 18:09:09.246508 kubelet[2567]: E0320 18:09:09.245267 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37754e6ad18cca31022ead6ae992cb37ab04780c8e6b3d1893adf023cef9a903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.246508 kubelet[2567]: E0320 18:09:09.245762 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e92be17cff3c0666ef4fed33d7e372f8bec75365c1fb55cf207421529f916b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.246857 kubelet[2567]: E0320 18:09:09.245782 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37754e6ad18cca31022ead6ae992cb37ab04780c8e6b3d1893adf023cef9a903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slp8s"
Mar 20 18:09:09.246910 kubelet[2567]: E0320 18:09:09.246861 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37754e6ad18cca31022ead6ae992cb37ab04780c8e6b3d1893adf023cef9a903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slp8s"
Mar 20 18:09:09.246932 kubelet[2567]: E0320 18:09:09.246910 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-slp8s_kube-system(4babc690-4f81-4ee4-89a2-e8d0442c4e6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-slp8s_kube-system(4babc690-4f81-4ee4-89a2-e8d0442c4e6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37754e6ad18cca31022ead6ae992cb37ab04780c8e6b3d1893adf023cef9a903\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-slp8s" podUID="4babc690-4f81-4ee4-89a2-e8d0442c4e6a"
Mar 20 18:09:09.247108 kubelet[2567]: E0320 18:09:09.246999 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e92be17cff3c0666ef4fed33d7e372f8bec75365c1fb55cf207421529f916b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d74f9c86-4ns2k"
Mar 20 18:09:09.247108 kubelet[2567]: E0320 18:09:09.247029 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e92be17cff3c0666ef4fed33d7e372f8bec75365c1fb55cf207421529f916b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d74f9c86-4ns2k"
Mar 20 18:09:09.247108 kubelet[2567]: E0320 18:09:09.247063 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76d74f9c86-4ns2k_calico-apiserver(82f33d5f-10f5-47ef-a24f-c0b87da4b7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d74f9c86-4ns2k_calico-apiserver(82f33d5f-10f5-47ef-a24f-c0b87da4b7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e92be17cff3c0666ef4fed33d7e372f8bec75365c1fb55cf207421529f916b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d74f9c86-4ns2k" podUID="82f33d5f-10f5-47ef-a24f-c0b87da4b7ef"
Mar 20 18:09:09.253479 containerd[1465]: time="2025-03-20T18:09:09.253442129Z" level=error msg="Failed to destroy network for sandbox \"b2ffbf5cd934c4b459221bba6314448477ccbbaca8e1849fa6087e401b77d396\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.254976 containerd[1465]: time="2025-03-20T18:09:09.254908823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsfqz,Uid:4565d58f-9b5f-479d-bb63-738e37af8858,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ffbf5cd934c4b459221bba6314448477ccbbaca8e1849fa6087e401b77d396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.256755 systemd[1]: run-netns-cni\x2d8c606989\x2db345\x2d7abe\x2dd30f\x2dc8faf3a8c6f2.mount: Deactivated successfully.
Mar 20 18:09:09.257728 kubelet[2567]: E0320 18:09:09.257538 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ffbf5cd934c4b459221bba6314448477ccbbaca8e1849fa6087e401b77d396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.257728 kubelet[2567]: E0320 18:09:09.257597 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ffbf5cd934c4b459221bba6314448477ccbbaca8e1849fa6087e401b77d396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bsfqz"
Mar 20 18:09:09.257728 kubelet[2567]: E0320 18:09:09.257616 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ffbf5cd934c4b459221bba6314448477ccbbaca8e1849fa6087e401b77d396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bsfqz"
Mar 20 18:09:09.258047 kubelet[2567]: E0320 18:09:09.257658 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bsfqz_calico-system(4565d58f-9b5f-479d-bb63-738e37af8858)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bsfqz_calico-system(4565d58f-9b5f-479d-bb63-738e37af8858)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2ffbf5cd934c4b459221bba6314448477ccbbaca8e1849fa6087e401b77d396\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bsfqz" podUID="4565d58f-9b5f-479d-bb63-738e37af8858"
Mar 20 18:09:09.263093 containerd[1465]: time="2025-03-20T18:09:09.263054567Z" level=error msg="Failed to destroy network for sandbox \"30e23c07a6dff7bd1248e640b84673d57758abf6c2f0aaf25c469ad52e128176\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.264102 containerd[1465]: time="2025-03-20T18:09:09.264066099Z" level=error msg="Failed to destroy network for sandbox \"e00afccc4f6e3730dfb7141a08158f630f91a99002c5f555a64d01f0603b0c4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.264151 containerd[1465]: time="2025-03-20T18:09:09.264075740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bc6fc597-kl4kp,Uid:ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e23c07a6dff7bd1248e640b84673d57758abf6c2f0aaf25c469ad52e128176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.264434 kubelet[2567]: E0320 18:09:09.264399 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e23c07a6dff7bd1248e640b84673d57758abf6c2f0aaf25c469ad52e128176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.264476 kubelet[2567]: E0320 18:09:09.264456 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e23c07a6dff7bd1248e640b84673d57758abf6c2f0aaf25c469ad52e128176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69bc6fc597-kl4kp"
Mar 20 18:09:09.264508 kubelet[2567]: E0320 18:09:09.264479 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e23c07a6dff7bd1248e640b84673d57758abf6c2f0aaf25c469ad52e128176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69bc6fc597-kl4kp"
Mar 20 18:09:09.264546 kubelet[2567]: E0320 18:09:09.264521 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69bc6fc597-kl4kp_calico-system(ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69bc6fc597-kl4kp_calico-system(ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30e23c07a6dff7bd1248e640b84673d57758abf6c2f0aaf25c469ad52e128176\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69bc6fc597-kl4kp" podUID="ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558"
Mar 20 18:09:09.265548 containerd[1465]: time="2025-03-20T18:09:09.265371178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bx4cn,Uid:f0710fdf-b26c-4817-bfe9-669a165df7c1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00afccc4f6e3730dfb7141a08158f630f91a99002c5f555a64d01f0603b0c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.265735 systemd[1]: run-netns-cni\x2d8f003d70\x2d815c\x2d3caf\x2db614\x2d22afa31f71f8.mount: Deactivated successfully.
Mar 20 18:09:09.266692 kubelet[2567]: E0320 18:09:09.266503 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00afccc4f6e3730dfb7141a08158f630f91a99002c5f555a64d01f0603b0c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.266692 kubelet[2567]: E0320 18:09:09.266547 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00afccc4f6e3730dfb7141a08158f630f91a99002c5f555a64d01f0603b0c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bx4cn"
Mar 20 18:09:09.266692 kubelet[2567]: E0320 18:09:09.266564 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00afccc4f6e3730dfb7141a08158f630f91a99002c5f555a64d01f0603b0c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bx4cn"
Mar 20 18:09:09.266820 kubelet[2567]: E0320 18:09:09.266607 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bx4cn_kube-system(f0710fdf-b26c-4817-bfe9-669a165df7c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bx4cn_kube-system(f0710fdf-b26c-4817-bfe9-669a165df7c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e00afccc4f6e3730dfb7141a08158f630f91a99002c5f555a64d01f0603b0c4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bx4cn" podUID="f0710fdf-b26c-4817-bfe9-669a165df7c1"
Mar 20 18:09:09.277258 containerd[1465]: time="2025-03-20T18:09:09.277148973Z" level=error msg="Failed to destroy network for sandbox \"62be657e9bfd3451d788078bb78d47200fe0c5215c86646f32d38abca45e2aac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.278532 containerd[1465]: time="2025-03-20T18:09:09.278488856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-ws9kv,Uid:71415888-c7b7-4f31-8d88-615151729410,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62be657e9bfd3451d788078bb78d47200fe0c5215c86646f32d38abca45e2aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.278730 kubelet[2567]: E0320 18:09:09.278686 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62be657e9bfd3451d788078bb78d47200fe0c5215c86646f32d38abca45e2aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 20 18:09:09.278847 kubelet[2567]: E0320 18:09:09.278734 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62be657e9bfd3451d788078bb78d47200fe0c5215c86646f32d38abca45e2aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d74f9c86-ws9kv"
Mar 20 18:09:09.278847 kubelet[2567]: E0320 18:09:09.278757 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62be657e9bfd3451d788078bb78d47200fe0c5215c86646f32d38abca45e2aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d74f9c86-ws9kv"
Mar 20 18:09:09.278847 kubelet[2567]: E0320 18:09:09.278787 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76d74f9c86-ws9kv_calico-apiserver(71415888-c7b7-4f31-8d88-615151729410)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d74f9c86-ws9kv_calico-apiserver(71415888-c7b7-4f31-8d88-615151729410)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62be657e9bfd3451d788078bb78d47200fe0c5215c86646f32d38abca45e2aac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d74f9c86-ws9kv" podUID="71415888-c7b7-4f31-8d88-615151729410" Mar 20 18:09:10.231672 systemd[1]: run-netns-cni\x2d829c1fdf\x2d28b9\x2dd74c\x2d5458\x2d38e5569fc5f7.mount: Deactivated successfully. Mar 20 18:09:10.231783 systemd[1]: run-netns-cni\x2d6f48ac70\x2db6f2\x2df608\x2d1c02\x2de0c31a562f3a.mount: Deactivated successfully. Mar 20 18:09:11.880152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377221489.mount: Deactivated successfully. Mar 20 18:09:12.156622 containerd[1465]: time="2025-03-20T18:09:12.156497327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:12.157200 containerd[1465]: time="2025-03-20T18:09:12.157146380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 20 18:09:12.157984 containerd[1465]: time="2025-03-20T18:09:12.157945645Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:12.159688 containerd[1465]: time="2025-03-20T18:09:12.159633822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:12.160247 containerd[1465]: time="2025-03-20T18:09:12.160035694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.134971215s" Mar 20 18:09:12.160247 containerd[1465]: time="2025-03-20T18:09:12.160070097Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 20 18:09:12.168344 containerd[1465]: time="2025-03-20T18:09:12.168300845Z" level=info msg="CreateContainer within sandbox \"08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 20 18:09:12.175605 containerd[1465]: time="2025-03-20T18:09:12.175574316Z" level=info msg="Container 2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:12.185867 containerd[1465]: time="2025-03-20T18:09:12.185750382Z" level=info msg="CreateContainer within sandbox \"08aaf480cca53cd9b9f9f717e02c55abf3097b35a3670c35cb4d4953fbe8e026\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675\"" Mar 20 18:09:12.186363 containerd[1465]: time="2025-03-20T18:09:12.186338789Z" level=info msg="StartContainer for \"2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675\"" Mar 20 18:09:12.187732 containerd[1465]: time="2025-03-20T18:09:12.187692659Z" level=info msg="connecting to shim 2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675" address="unix:///run/containerd/s/0a4faea41fbd76bbfba4cfd04f8be4a5b1c0aa78e0569b8dbaf0a14e54a60ae6" protocol=ttrpc version=3 Mar 20 18:09:12.203438 systemd[1]: Started cri-containerd-2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675.scope - libcontainer container 2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675. Mar 20 18:09:12.288478 containerd[1465]: time="2025-03-20T18:09:12.288432677Z" level=info msg="StartContainer for \"2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675\" returns successfully" Mar 20 18:09:12.407963 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Mar 20 18:09:12.408061 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 20 18:09:12.872111 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:44678.service - OpenSSH per-connection server daemon (10.0.0.1:44678). Mar 20 18:09:12.934702 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 44678 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:12.936151 sshd-session[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:12.940101 systemd-logind[1442]: New session 8 of user core. Mar 20 18:09:12.949448 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 18:09:13.056643 kubelet[2567]: I0320 18:09:13.055376 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9lfmn" podStartSLOduration=1.6662295070000002 podStartE2EDuration="12.055360011s" podCreationTimestamp="2025-03-20 18:09:01 +0000 UTC" firstStartedPulling="2025-03-20 18:09:01.771585486 +0000 UTC m=+16.924543701" lastFinishedPulling="2025-03-20 18:09:12.16071603 +0000 UTC m=+27.313674205" observedRunningTime="2025-03-20 18:09:13.054475022 +0000 UTC m=+28.207433237" watchObservedRunningTime="2025-03-20 18:09:13.055360011 +0000 UTC m=+28.208318226" Mar 20 18:09:13.081434 sshd[3566]: Connection closed by 10.0.0.1 port 44678 Mar 20 18:09:13.081751 sshd-session[3564]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:13.084836 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 18:09:13.086263 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Mar 20 18:09:13.086756 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:44678.service: Deactivated successfully. Mar 20 18:09:13.090447 systemd-logind[1442]: Removed session 8. 
Mar 20 18:09:13.157494 containerd[1465]: time="2025-03-20T18:09:13.157374351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675\" id:\"65032657ad4f681dc5608848e2f55e9cdfb4a9b94e572dfb7bfb5ebc6b7cef46\" pid:3592 exit_status:1 exited_at:{seconds:1742494153 nanos:157075487}" Mar 20 18:09:13.869776 kernel: bpftool[3736]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 20 18:09:14.032270 systemd-networkd[1399]: vxlan.calico: Link UP Mar 20 18:09:14.032290 systemd-networkd[1399]: vxlan.calico: Gained carrier Mar 20 18:09:14.111466 containerd[1465]: time="2025-03-20T18:09:14.111154405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675\" id:\"53ab2865189a2430d56f83ea015afba5d33ee9c298a0473959101c263f80dbd2\" pid:3779 exit_status:1 exited_at:{seconds:1742494154 nanos:110561681}" Mar 20 18:09:15.102588 containerd[1465]: time="2025-03-20T18:09:15.102546094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675\" id:\"bc82ed951216cf3d3fe023aa785976c2518c705c9acf058b090a8c396b8eb034\" pid:3844 exit_status:1 exited_at:{seconds:1742494155 nanos:101479576}" Mar 20 18:09:15.892869 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Mar 20 18:09:18.100169 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:44680.service - OpenSSH per-connection server daemon (10.0.0.1:44680). Mar 20 18:09:18.159798 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 44680 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:18.161268 sshd-session[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:18.167071 systemd-logind[1442]: New session 9 of user core. Mar 20 18:09:18.177443 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 20 18:09:18.318399 sshd[3861]: Connection closed by 10.0.0.1 port 44680 Mar 20 18:09:18.318700 sshd-session[3859]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:18.322950 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:44680.service: Deactivated successfully. Mar 20 18:09:18.325704 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 18:09:18.326371 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Mar 20 18:09:18.327433 systemd-logind[1442]: Removed session 9. Mar 20 18:09:20.945561 containerd[1465]: time="2025-03-20T18:09:20.945505542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slp8s,Uid:4babc690-4f81-4ee4-89a2-e8d0442c4e6a,Namespace:kube-system,Attempt:0,}" Mar 20 18:09:20.946023 containerd[1465]: time="2025-03-20T18:09:20.945930328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsfqz,Uid:4565d58f-9b5f-479d-bb63-738e37af8858,Namespace:calico-system,Attempt:0,}" Mar 20 18:09:20.946023 containerd[1465]: time="2025-03-20T18:09:20.945950369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bc6fc597-kl4kp,Uid:ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558,Namespace:calico-system,Attempt:0,}" Mar 20 18:09:21.259128 systemd-networkd[1399]: calid91152c944c: Link UP Mar 20 18:09:21.260068 systemd-networkd[1399]: calid91152c944c: Gained carrier Mar 20 18:09:21.274591 containerd[1465]: 2025-03-20 18:09:21.097 [INFO][3895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bsfqz-eth0 csi-node-driver- calico-system 4565d58f-9b5f-479d-bb63-738e37af8858 589 0 2025-03-20 18:09:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bsfqz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid91152c944c [] []}} ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-" Mar 20 18:09:21.274591 containerd[1465]: 2025-03-20 18:09:21.097 [INFO][3895] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-eth0" Mar 20 18:09:21.274591 containerd[1465]: 2025-03-20 18:09:21.212 [INFO][3924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" HandleID="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Workload="localhost-k8s-csi--node--driver--bsfqz-eth0" Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.226 [INFO][3924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" HandleID="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Workload="localhost-k8s-csi--node--driver--bsfqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bsfqz", "timestamp":"2025-03-20 18:09:21.212600549 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.226 [INFO][3924] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.226 [INFO][3924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.226 [INFO][3924] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.228 [INFO][3924] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" host="localhost" Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.234 [INFO][3924] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.237 [INFO][3924] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.239 [INFO][3924] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.241 [INFO][3924] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:21.274793 containerd[1465]: 2025-03-20 18:09:21.241 [INFO][3924] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" host="localhost" Mar 20 18:09:21.276039 containerd[1465]: 2025-03-20 18:09:21.242 [INFO][3924] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b Mar 20 18:09:21.276039 containerd[1465]: 2025-03-20 18:09:21.245 [INFO][3924] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" host="localhost" Mar 20 18:09:21.276039 
containerd[1465]: 2025-03-20 18:09:21.251 [INFO][3924] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" host="localhost" Mar 20 18:09:21.276039 containerd[1465]: 2025-03-20 18:09:21.251 [INFO][3924] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" host="localhost" Mar 20 18:09:21.276039 containerd[1465]: 2025-03-20 18:09:21.251 [INFO][3924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 18:09:21.276039 containerd[1465]: 2025-03-20 18:09:21.251 [INFO][3924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" HandleID="k8s-pod-network.7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Workload="localhost-k8s-csi--node--driver--bsfqz-eth0" Mar 20 18:09:21.276187 containerd[1465]: 2025-03-20 18:09:21.254 [INFO][3895] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bsfqz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4565d58f-9b5f-479d-bb63-738e37af8858", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bsfqz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid91152c944c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:21.276187 containerd[1465]: 2025-03-20 18:09:21.254 [INFO][3895] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-eth0" Mar 20 18:09:21.276306 containerd[1465]: 2025-03-20 18:09:21.254 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid91152c944c ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-eth0" Mar 20 18:09:21.276306 containerd[1465]: 2025-03-20 18:09:21.260 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-eth0" Mar 20 18:09:21.276904 containerd[1465]: 2025-03-20 18:09:21.260 [INFO][3895] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bsfqz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4565d58f-9b5f-479d-bb63-738e37af8858", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b", Pod:"csi-node-driver-bsfqz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid91152c944c", MAC:"ba:0a:38:63:1d:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:21.276986 containerd[1465]: 2025-03-20 18:09:21.271 [INFO][3895] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" Namespace="calico-system" Pod="csi-node-driver-bsfqz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bsfqz-eth0" Mar 20 18:09:21.348346 containerd[1465]: time="2025-03-20T18:09:21.348268292Z" level=info msg="connecting to shim 7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b" address="unix:///run/containerd/s/936ff5f1213a1b6a243a0a1decb0108753107af46f7b21f9f7f4a0659919a708" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:21.361593 systemd-networkd[1399]: cali7ba48c82836: Link UP Mar 20 18:09:21.361909 systemd-networkd[1399]: cali7ba48c82836: Gained carrier Mar 20 18:09:21.376816 containerd[1465]: 2025-03-20 18:09:21.097 [INFO][3901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0 calico-kube-controllers-69bc6fc597- calico-system ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558 666 0 2025-03-20 18:09:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69bc6fc597 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69bc6fc597-kl4kp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7ba48c82836 [] []}} ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-" Mar 20 18:09:21.376816 containerd[1465]: 2025-03-20 18:09:21.097 [INFO][3901] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" Mar 20 18:09:21.376816 containerd[1465]: 2025-03-20 
18:09:21.212 [INFO][3926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" HandleID="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Workload="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.230 [INFO][3926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" HandleID="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Workload="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cce0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69bc6fc597-kl4kp", "timestamp":"2025-03-20 18:09:21.21260179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.230 [INFO][3926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.251 [INFO][3926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.251 [INFO][3926] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.331 [INFO][3926] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" host="localhost" Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.335 [INFO][3926] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.339 [INFO][3926] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.340 [INFO][3926] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.342 [INFO][3926] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:21.377036 containerd[1465]: 2025-03-20 18:09:21.342 [INFO][3926] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" host="localhost" Mar 20 18:09:21.377227 containerd[1465]: 2025-03-20 18:09:21.344 [INFO][3926] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048 Mar 20 18:09:21.377227 containerd[1465]: 2025-03-20 18:09:21.349 [INFO][3926] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" host="localhost" Mar 20 18:09:21.377227 containerd[1465]: 2025-03-20 18:09:21.354 [INFO][3926] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" host="localhost" Mar 20 18:09:21.377227 containerd[1465]: 2025-03-20 18:09:21.354 [INFO][3926] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" host="localhost" Mar 20 18:09:21.377227 containerd[1465]: 2025-03-20 18:09:21.354 [INFO][3926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 18:09:21.377227 containerd[1465]: 2025-03-20 18:09:21.354 [INFO][3926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" HandleID="k8s-pod-network.941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Workload="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" Mar 20 18:09:21.377663 containerd[1465]: 2025-03-20 18:09:21.357 [INFO][3901] cni-plugin/k8s.go 386: Populated endpoint ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0", GenerateName:"calico-kube-controllers-69bc6fc597-", Namespace:"calico-system", SelfLink:"", UID:"ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bc6fc597", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69bc6fc597-kl4kp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ba48c82836", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:21.377753 containerd[1465]: 2025-03-20 18:09:21.358 [INFO][3901] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" Mar 20 18:09:21.377753 containerd[1465]: 2025-03-20 18:09:21.358 [INFO][3901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ba48c82836 ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" Mar 20 18:09:21.377753 containerd[1465]: 2025-03-20 18:09:21.361 [INFO][3901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" Mar 20 18:09:21.377836 containerd[1465]: 2025-03-20 18:09:21.361 [INFO][3901] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0", GenerateName:"calico-kube-controllers-69bc6fc597-", Namespace:"calico-system", SelfLink:"", UID:"ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bc6fc597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048", Pod:"calico-kube-controllers-69bc6fc597-kl4kp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ba48c82836", MAC:"76:14:79:ad:0e:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:21.377932 containerd[1465]: 2025-03-20 18:09:21.372 [INFO][3901] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" Namespace="calico-system" Pod="calico-kube-controllers-69bc6fc597-kl4kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bc6fc597--kl4kp-eth0" Mar 20 18:09:21.379356 systemd[1]: Started cri-containerd-7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b.scope - libcontainer container 7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b. Mar 20 18:09:21.390288 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 18:09:21.400426 containerd[1465]: time="2025-03-20T18:09:21.400346673Z" level=info msg="connecting to shim 941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048" address="unix:///run/containerd/s/94a41ae1085dbff0068305963c170bc68f9fc1d31810baa70d7c9995ac88fdfa" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:21.403007 containerd[1465]: time="2025-03-20T18:09:21.402945350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsfqz,Uid:4565d58f-9b5f-479d-bb63-738e37af8858,Namespace:calico-system,Attempt:0,} returns sandbox id \"7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b\"" Mar 20 18:09:21.404536 containerd[1465]: time="2025-03-20T18:09:21.404503604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 20 18:09:21.424449 systemd[1]: Started cri-containerd-941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048.scope - libcontainer container 941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048. 
Mar 20 18:09:21.443522 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 18:09:21.462725 systemd-networkd[1399]: calib224456c58f: Link UP Mar 20 18:09:21.463176 systemd-networkd[1399]: calib224456c58f: Gained carrier Mar 20 18:09:21.477912 containerd[1465]: 2025-03-20 18:09:21.097 [INFO][3883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--slp8s-eth0 coredns-6f6b679f8f- kube-system 4babc690-4f81-4ee4-89a2-e8d0442c4e6a 669 0 2025-03-20 18:08:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-slp8s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib224456c58f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-" Mar 20 18:09:21.477912 containerd[1465]: 2025-03-20 18:09:21.097 [INFO][3883] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" Mar 20 18:09:21.477912 containerd[1465]: 2025-03-20 18:09:21.212 [INFO][3929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" HandleID="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Workload="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.231 [INFO][3929] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" HandleID="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Workload="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9110), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-slp8s", "timestamp":"2025-03-20 18:09:21.212795761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.231 [INFO][3929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.354 [INFO][3929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.354 [INFO][3929] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.431 [INFO][3929] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" host="localhost" Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.436 [INFO][3929] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.440 [INFO][3929] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.442 [INFO][3929] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.444 [INFO][3929] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:21.478161 containerd[1465]: 2025-03-20 18:09:21.444 [INFO][3929] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" host="localhost" Mar 20 18:09:21.478379 containerd[1465]: 2025-03-20 18:09:21.446 [INFO][3929] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725 Mar 20 18:09:21.478379 containerd[1465]: 2025-03-20 18:09:21.449 [INFO][3929] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" host="localhost" Mar 20 18:09:21.478379 containerd[1465]: 2025-03-20 18:09:21.454 [INFO][3929] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" host="localhost" Mar 20 18:09:21.478379 containerd[1465]: 2025-03-20 18:09:21.454 [INFO][3929] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" host="localhost" Mar 20 18:09:21.478379 containerd[1465]: 2025-03-20 18:09:21.454 [INFO][3929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 18:09:21.478379 containerd[1465]: 2025-03-20 18:09:21.454 [INFO][3929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" HandleID="k8s-pod-network.fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Workload="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" Mar 20 18:09:21.478496 containerd[1465]: 2025-03-20 18:09:21.456 [INFO][3883] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--slp8s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4babc690-4f81-4ee4-89a2-e8d0442c4e6a", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-slp8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib224456c58f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:21.478550 containerd[1465]: 2025-03-20 18:09:21.458 [INFO][3883] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" Mar 20 18:09:21.478550 containerd[1465]: 2025-03-20 18:09:21.458 [INFO][3883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib224456c58f ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" Mar 20 18:09:21.478550 containerd[1465]: 2025-03-20 18:09:21.465 [INFO][3883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" Mar 20 18:09:21.478609 containerd[1465]: 2025-03-20 18:09:21.465 [INFO][3883] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--slp8s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4babc690-4f81-4ee4-89a2-e8d0442c4e6a", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725", Pod:"coredns-6f6b679f8f-slp8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib224456c58f", MAC:"da:59:10:c9:ff:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:21.478609 containerd[1465]: 2025-03-20 18:09:21.474 [INFO][3883] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-slp8s" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slp8s-eth0" Mar 20 18:09:21.481386 containerd[1465]: time="2025-03-20T18:09:21.481271914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bc6fc597-kl4kp,Uid:ad4bd5fc-9151-4c1c-8fc2-0e7cd0a5e558,Namespace:calico-system,Attempt:0,} returns sandbox id \"941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048\"" Mar 20 18:09:21.511015 containerd[1465]: time="2025-03-20T18:09:21.510471555Z" level=info msg="connecting to shim fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725" address="unix:///run/containerd/s/7bc2b9aaf62312d43d87fc8dff4aab07beecd1466a03c6f9cd2eeefd1ac19eae" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:21.537479 systemd[1]: Started cri-containerd-fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725.scope - libcontainer container fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725. Mar 20 18:09:21.549517 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 18:09:21.568139 containerd[1465]: time="2025-03-20T18:09:21.568103711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slp8s,Uid:4babc690-4f81-4ee4-89a2-e8d0442c4e6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725\"" Mar 20 18:09:21.570828 containerd[1465]: time="2025-03-20T18:09:21.570789753Z" level=info msg="CreateContainer within sandbox \"fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 18:09:21.582840 containerd[1465]: time="2025-03-20T18:09:21.582793677Z" level=info msg="Container d0bcc37c6d3ce91bafd3b31189afd4e5dfc3c2fedb9f2a089948c81ed782dfc8: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:21.588092 containerd[1465]: 
time="2025-03-20T18:09:21.588046913Z" level=info msg="CreateContainer within sandbox \"fda48ec276e3bfbf98f2b63d4049e5a010206a20c7f042a6bee6ae02e5ad3725\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0bcc37c6d3ce91bafd3b31189afd4e5dfc3c2fedb9f2a089948c81ed782dfc8\"" Mar 20 18:09:21.588730 containerd[1465]: time="2025-03-20T18:09:21.588697153Z" level=info msg="StartContainer for \"d0bcc37c6d3ce91bafd3b31189afd4e5dfc3c2fedb9f2a089948c81ed782dfc8\"" Mar 20 18:09:21.589482 containerd[1465]: time="2025-03-20T18:09:21.589449678Z" level=info msg="connecting to shim d0bcc37c6d3ce91bafd3b31189afd4e5dfc3c2fedb9f2a089948c81ed782dfc8" address="unix:///run/containerd/s/7bc2b9aaf62312d43d87fc8dff4aab07beecd1466a03c6f9cd2eeefd1ac19eae" protocol=ttrpc version=3 Mar 20 18:09:21.609547 systemd[1]: Started cri-containerd-d0bcc37c6d3ce91bafd3b31189afd4e5dfc3c2fedb9f2a089948c81ed782dfc8.scope - libcontainer container d0bcc37c6d3ce91bafd3b31189afd4e5dfc3c2fedb9f2a089948c81ed782dfc8. 
Mar 20 18:09:21.640419 containerd[1465]: time="2025-03-20T18:09:21.640372749Z" level=info msg="StartContainer for \"d0bcc37c6d3ce91bafd3b31189afd4e5dfc3c2fedb9f2a089948c81ed782dfc8\" returns successfully" Mar 20 18:09:21.945848 containerd[1465]: time="2025-03-20T18:09:21.945795930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bx4cn,Uid:f0710fdf-b26c-4817-bfe9-669a165df7c1,Namespace:kube-system,Attempt:0,}" Mar 20 18:09:21.946332 containerd[1465]: time="2025-03-20T18:09:21.945797330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-ws9kv,Uid:71415888-c7b7-4f31-8d88-615151729410,Namespace:calico-apiserver,Attempt:0,}" Mar 20 18:09:22.081415 systemd-networkd[1399]: calie4f24563690: Link UP Mar 20 18:09:22.081743 systemd-networkd[1399]: calie4f24563690: Gained carrier Mar 20 18:09:22.103324 kubelet[2567]: I0320 18:09:22.098918 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-slp8s" podStartSLOduration=31.098902723 podStartE2EDuration="31.098902723s" podCreationTimestamp="2025-03-20 18:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:09:22.093169066 +0000 UTC m=+37.246127281" watchObservedRunningTime="2025-03-20 18:09:22.098902723 +0000 UTC m=+37.251860938" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:21.997 [INFO][4170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0 calico-apiserver-76d74f9c86- calico-apiserver 71415888-c7b7-4f31-8d88-615151729410 668 0 2025-03-20 18:09:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d74f9c86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76d74f9c86-ws9kv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie4f24563690 [] []}} ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:21.997 [INFO][4170] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.023 [INFO][4196] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" HandleID="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Workload="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.037 [INFO][4196] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" HandleID="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Workload="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c1a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76d74f9c86-ws9kv", "timestamp":"2025-03-20 18:09:22.023357571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.037 [INFO][4196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.037 [INFO][4196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.037 [INFO][4196] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.039 [INFO][4196] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.043 [INFO][4196] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.047 [INFO][4196] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.049 [INFO][4196] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.051 [INFO][4196] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.051 [INFO][4196] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.052 [INFO][4196] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163 Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.055 [INFO][4196] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.064 [INFO][4196] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.064 [INFO][4196] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" host="localhost" Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.064 [INFO][4196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 18:09:22.103716 containerd[1465]: 2025-03-20 18:09:22.064 [INFO][4196] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" HandleID="k8s-pod-network.72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Workload="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" Mar 20 18:09:22.104123 containerd[1465]: 2025-03-20 18:09:22.068 [INFO][4170] cni-plugin/k8s.go 386: Populated endpoint ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0", GenerateName:"calico-apiserver-76d74f9c86-", Namespace:"calico-apiserver", SelfLink:"", UID:"71415888-c7b7-4f31-8d88-615151729410", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d74f9c86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76d74f9c86-ws9kv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4f24563690", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:22.104123 containerd[1465]: 2025-03-20 18:09:22.068 [INFO][4170] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" Mar 20 18:09:22.104123 containerd[1465]: 2025-03-20 18:09:22.068 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4f24563690 ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" Mar 20 18:09:22.104123 containerd[1465]: 2025-03-20 18:09:22.079 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" 
Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" Mar 20 18:09:22.104123 containerd[1465]: 2025-03-20 18:09:22.082 [INFO][4170] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0", GenerateName:"calico-apiserver-76d74f9c86-", Namespace:"calico-apiserver", SelfLink:"", UID:"71415888-c7b7-4f31-8d88-615151729410", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d74f9c86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163", Pod:"calico-apiserver-76d74f9c86-ws9kv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4f24563690", MAC:"4a:aa:ce:5f:b6:22", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:22.104123 containerd[1465]: 2025-03-20 18:09:22.095 [INFO][4170] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-ws9kv" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--ws9kv-eth0" Mar 20 18:09:22.146076 containerd[1465]: time="2025-03-20T18:09:22.146017686Z" level=info msg="connecting to shim 72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163" address="unix:///run/containerd/s/15751f7a79f59d3a34cd478899a52be9113f1a27c2bc42d07f18dc794fa67cbe" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:22.178467 systemd[1]: Started cri-containerd-72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163.scope - libcontainer container 72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163. Mar 20 18:09:22.199483 systemd-networkd[1399]: cali0c99a8ea448: Link UP Mar 20 18:09:22.199833 systemd-networkd[1399]: cali0c99a8ea448: Gained carrier Mar 20 18:09:22.208605 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:21.997 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0 coredns-6f6b679f8f- kube-system f0710fdf-b26c-4817-bfe9-669a165df7c1 667 0 2025-03-20 18:08:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-bx4cn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0c99a8ea448 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:21.997 [INFO][4164] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.030 [INFO][4195] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" HandleID="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Workload="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.043 [INFO][4195] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" HandleID="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Workload="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-bx4cn", "timestamp":"2025-03-20 18:09:22.030095807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.043 [INFO][4195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.065 [INFO][4195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.065 [INFO][4195] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.142 [INFO][4195] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.151 [INFO][4195] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.159 [INFO][4195] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.161 [INFO][4195] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.164 [INFO][4195] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.164 [INFO][4195] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.166 [INFO][4195] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5 Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.172 [INFO][4195] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.180 [INFO][4195] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.180 [INFO][4195] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" host="localhost" Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.181 [INFO][4195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 18:09:22.213754 containerd[1465]: 2025-03-20 18:09:22.181 [INFO][4195] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" HandleID="k8s-pod-network.4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Workload="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" Mar 20 18:09:22.214237 containerd[1465]: 2025-03-20 18:09:22.196 [INFO][4164] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f0710fdf-b26c-4817-bfe9-669a165df7c1", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-bx4cn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c99a8ea448", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:22.214237 containerd[1465]: 2025-03-20 18:09:22.196 [INFO][4164] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" Mar 20 18:09:22.214237 containerd[1465]: 2025-03-20 18:09:22.196 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c99a8ea448 ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" Mar 20 18:09:22.214237 containerd[1465]: 2025-03-20 18:09:22.199 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" Mar 20 
18:09:22.214237 containerd[1465]: 2025-03-20 18:09:22.201 [INFO][4164] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f0710fdf-b26c-4817-bfe9-669a165df7c1", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5", Pod:"coredns-6f6b679f8f-bx4cn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c99a8ea448", MAC:"1e:3a:7b:06:d9:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:22.214237 containerd[1465]: 2025-03-20 18:09:22.210 [INFO][4164] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bx4cn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bx4cn-eth0" Mar 20 18:09:22.240852 containerd[1465]: time="2025-03-20T18:09:22.240785485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-ws9kv,Uid:71415888-c7b7-4f31-8d88-615151729410,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163\"" Mar 20 18:09:22.255514 containerd[1465]: time="2025-03-20T18:09:22.255452865Z" level=info msg="connecting to shim 4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5" address="unix:///run/containerd/s/188729e8949a79d7d4cf8f6c5088169d1cc6d87c9e92cddfba8a8ef7618ed611" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:22.282589 systemd[1]: Started cri-containerd-4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5.scope - libcontainer container 4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5. 
Mar 20 18:09:22.301233 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 18:09:22.331294 containerd[1465]: time="2025-03-20T18:09:22.331243751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bx4cn,Uid:f0710fdf-b26c-4817-bfe9-669a165df7c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5\"" Mar 20 18:09:22.333906 containerd[1465]: time="2025-03-20T18:09:22.333572447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:22.335032 containerd[1465]: time="2025-03-20T18:09:22.334210245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 20 18:09:22.335032 containerd[1465]: time="2025-03-20T18:09:22.334296570Z" level=info msg="CreateContainer within sandbox \"4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 18:09:22.335032 containerd[1465]: time="2025-03-20T18:09:22.334881764Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:22.337996 containerd[1465]: time="2025-03-20T18:09:22.337960105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:22.338994 containerd[1465]: time="2025-03-20T18:09:22.338708829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 933.979971ms" Mar 20 18:09:22.338994 containerd[1465]: time="2025-03-20T18:09:22.338740031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 20 18:09:22.340972 containerd[1465]: time="2025-03-20T18:09:22.340919838Z" level=info msg="Container 6d1d68b900b9c5c1a62aa357a8e8f7a3d4db8733e0895b73193546c52c7b803c: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:22.343870 containerd[1465]: time="2025-03-20T18:09:22.343096486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 20 18:09:22.343870 containerd[1465]: time="2025-03-20T18:09:22.343776486Z" level=info msg="CreateContainer within sandbox \"7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 20 18:09:22.351563 containerd[1465]: time="2025-03-20T18:09:22.351512140Z" level=info msg="CreateContainer within sandbox \"4ab8788dac3c19c70ab48de748bb4e72004bcdfe35fa28bd609ca446df2174f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d1d68b900b9c5c1a62aa357a8e8f7a3d4db8733e0895b73193546c52c7b803c\"" Mar 20 18:09:22.351933 containerd[1465]: time="2025-03-20T18:09:22.351906483Z" level=info msg="StartContainer for \"6d1d68b900b9c5c1a62aa357a8e8f7a3d4db8733e0895b73193546c52c7b803c\"" Mar 20 18:09:22.353155 containerd[1465]: time="2025-03-20T18:09:22.352747972Z" level=info msg="connecting to shim 6d1d68b900b9c5c1a62aa357a8e8f7a3d4db8733e0895b73193546c52c7b803c" address="unix:///run/containerd/s/188729e8949a79d7d4cf8f6c5088169d1cc6d87c9e92cddfba8a8ef7618ed611" protocol=ttrpc version=3 Mar 20 18:09:22.360200 containerd[1465]: time="2025-03-20T18:09:22.360041240Z" level=info msg="Container 
ac192652b2403fcc9c3a5dbd3e24a27375545f99b68048bdfff3413d8adc34f1: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:22.371477 systemd[1]: Started cri-containerd-6d1d68b900b9c5c1a62aa357a8e8f7a3d4db8733e0895b73193546c52c7b803c.scope - libcontainer container 6d1d68b900b9c5c1a62aa357a8e8f7a3d4db8733e0895b73193546c52c7b803c. Mar 20 18:09:22.401129 containerd[1465]: time="2025-03-20T18:09:22.401028964Z" level=info msg="StartContainer for \"6d1d68b900b9c5c1a62aa357a8e8f7a3d4db8733e0895b73193546c52c7b803c\" returns successfully" Mar 20 18:09:22.403849 containerd[1465]: time="2025-03-20T18:09:22.403803687Z" level=info msg="CreateContainer within sandbox \"7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ac192652b2403fcc9c3a5dbd3e24a27375545f99b68048bdfff3413d8adc34f1\"" Mar 20 18:09:22.404745 containerd[1465]: time="2025-03-20T18:09:22.404529049Z" level=info msg="StartContainer for \"ac192652b2403fcc9c3a5dbd3e24a27375545f99b68048bdfff3413d8adc34f1\"" Mar 20 18:09:22.406588 containerd[1465]: time="2025-03-20T18:09:22.406557648Z" level=info msg="connecting to shim ac192652b2403fcc9c3a5dbd3e24a27375545f99b68048bdfff3413d8adc34f1" address="unix:///run/containerd/s/936ff5f1213a1b6a243a0a1decb0108753107af46f7b21f9f7f4a0659919a708" protocol=ttrpc version=3 Mar 20 18:09:22.429526 systemd[1]: Started cri-containerd-ac192652b2403fcc9c3a5dbd3e24a27375545f99b68048bdfff3413d8adc34f1.scope - libcontainer container ac192652b2403fcc9c3a5dbd3e24a27375545f99b68048bdfff3413d8adc34f1. 
Mar 20 18:09:22.468396 systemd-networkd[1399]: calid91152c944c: Gained IPv6LL Mar 20 18:09:22.469347 systemd-networkd[1399]: cali7ba48c82836: Gained IPv6LL Mar 20 18:09:22.539435 containerd[1465]: time="2025-03-20T18:09:22.538766643Z" level=info msg="StartContainer for \"ac192652b2403fcc9c3a5dbd3e24a27375545f99b68048bdfff3413d8adc34f1\" returns successfully" Mar 20 18:09:22.946175 containerd[1465]: time="2025-03-20T18:09:22.946097696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-4ns2k,Uid:82f33d5f-10f5-47ef-a24f-c0b87da4b7ef,Namespace:calico-apiserver,Attempt:0,}" Mar 20 18:09:22.981457 systemd-networkd[1399]: calib224456c58f: Gained IPv6LL Mar 20 18:09:23.099894 kubelet[2567]: I0320 18:09:23.099436 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bx4cn" podStartSLOduration=32.099418777 podStartE2EDuration="32.099418777s" podCreationTimestamp="2025-03-20 18:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:09:23.097957253 +0000 UTC m=+38.250915508" watchObservedRunningTime="2025-03-20 18:09:23.099418777 +0000 UTC m=+38.252376992" Mar 20 18:09:23.142272 systemd-networkd[1399]: cali8ceb6af84e1: Link UP Mar 20 18:09:23.142967 systemd-networkd[1399]: cali8ceb6af84e1: Gained carrier Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:22.982 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0 calico-apiserver-76d74f9c86- calico-apiserver 82f33d5f-10f5-47ef-a24f-c0b87da4b7ef 665 0 2025-03-20 18:09:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d74f9c86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76d74f9c86-4ns2k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ceb6af84e1 [] []}} ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:22.983 [INFO][4407] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.012 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" HandleID="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Workload="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.023 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" HandleID="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Workload="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005bed40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76d74f9c86-4ns2k", "timestamp":"2025-03-20 18:09:23.012035547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.023 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.024 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.024 [INFO][4420] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.025 [INFO][4420] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.082 [INFO][4420] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.122 [INFO][4420] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.124 [INFO][4420] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.126 [INFO][4420] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.126 [INFO][4420] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.128 [INFO][4420] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.131 [INFO][4420] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.136 [INFO][4420] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.136 [INFO][4420] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" host="localhost" Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.136 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 18:09:23.156104 containerd[1465]: 2025-03-20 18:09:23.136 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" HandleID="k8s-pod-network.d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Workload="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" Mar 20 18:09:23.156777 containerd[1465]: 2025-03-20 18:09:23.138 [INFO][4407] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0", GenerateName:"calico-apiserver-76d74f9c86-", Namespace:"calico-apiserver", SelfLink:"", UID:"82f33d5f-10f5-47ef-a24f-c0b87da4b7ef", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d74f9c86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76d74f9c86-4ns2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ceb6af84e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:23.156777 containerd[1465]: 2025-03-20 18:09:23.139 [INFO][4407] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" Mar 20 18:09:23.156777 containerd[1465]: 2025-03-20 18:09:23.139 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ceb6af84e1 ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" Mar 20 18:09:23.156777 containerd[1465]: 2025-03-20 18:09:23.143 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" 
Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" Mar 20 18:09:23.156777 containerd[1465]: 2025-03-20 18:09:23.143 [INFO][4407] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0", GenerateName:"calico-apiserver-76d74f9c86-", Namespace:"calico-apiserver", SelfLink:"", UID:"82f33d5f-10f5-47ef-a24f-c0b87da4b7ef", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 18, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d74f9c86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e", Pod:"calico-apiserver-76d74f9c86-4ns2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ceb6af84e1", MAC:"7e:75:15:af:b9:43", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 18:09:23.156777 containerd[1465]: 2025-03-20 18:09:23.152 [INFO][4407] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" Namespace="calico-apiserver" Pod="calico-apiserver-76d74f9c86-4ns2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d74f9c86--4ns2k-eth0" Mar 20 18:09:23.225071 containerd[1465]: time="2025-03-20T18:09:23.223918526Z" level=info msg="connecting to shim d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e" address="unix:///run/containerd/s/957cc37fd571038f0b2f3806bc7b5cdb35ea4738694a982dd0d4acea8feedf95" namespace=k8s.io protocol=ttrpc version=3 Mar 20 18:09:23.256452 systemd[1]: Started cri-containerd-d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e.scope - libcontainer container d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e. Mar 20 18:09:23.267898 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 18:09:23.288998 containerd[1465]: time="2025-03-20T18:09:23.288959480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d74f9c86-4ns2k,Uid:82f33d5f-10f5-47ef-a24f-c0b87da4b7ef,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e\"" Mar 20 18:09:23.334807 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:35514.service - OpenSSH per-connection server daemon (10.0.0.1:35514). Mar 20 18:09:23.409490 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 35514 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:23.413783 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:23.418229 systemd-logind[1442]: New session 10 of user core. 
Mar 20 18:09:23.424428 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 18:09:23.492397 systemd-networkd[1399]: cali0c99a8ea448: Gained IPv6LL Mar 20 18:09:23.645323 sshd[4499]: Connection closed by 10.0.0.1 port 35514 Mar 20 18:09:23.647633 sshd-session[4497]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:23.654719 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:35514.service: Deactivated successfully. Mar 20 18:09:23.656811 systemd[1]: session-10.scope: Deactivated successfully. Mar 20 18:09:23.658427 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Mar 20 18:09:23.660151 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:35518.service - OpenSSH per-connection server daemon (10.0.0.1:35518). Mar 20 18:09:23.661584 systemd-logind[1442]: Removed session 10. Mar 20 18:09:23.698010 containerd[1465]: time="2025-03-20T18:09:23.697946155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:23.699038 containerd[1465]: time="2025-03-20T18:09:23.698991335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 20 18:09:23.700315 containerd[1465]: time="2025-03-20T18:09:23.700100758Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:23.702215 containerd[1465]: time="2025-03-20T18:09:23.702182877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:23.703170 containerd[1465]: time="2025-03-20T18:09:23.703123051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id 
\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 1.359989362s" Mar 20 18:09:23.703170 containerd[1465]: time="2025-03-20T18:09:23.703158453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 20 18:09:23.704773 containerd[1465]: time="2025-03-20T18:09:23.704748583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 20 18:09:23.712509 containerd[1465]: time="2025-03-20T18:09:23.712476305Z" level=info msg="CreateContainer within sandbox \"941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 20 18:09:23.720818 containerd[1465]: time="2025-03-20T18:09:23.720777019Z" level=info msg="Container ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:23.726604 containerd[1465]: time="2025-03-20T18:09:23.726570269Z" level=info msg="CreateContainer within sandbox \"941f5f604792ae21debefb786abd44548078d410b8590e433c0e7162ff30e048\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd\"" Mar 20 18:09:23.728669 containerd[1465]: time="2025-03-20T18:09:23.727193385Z" level=info msg="StartContainer for \"ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd\"" Mar 20 18:09:23.728669 containerd[1465]: time="2025-03-20T18:09:23.728121638Z" level=info msg="connecting to shim ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd" 
address="unix:///run/containerd/s/94a41ae1085dbff0068305963c170bc68f9fc1d31810baa70d7c9995ac88fdfa" protocol=ttrpc version=3 Mar 20 18:09:23.746992 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 35518 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:23.748050 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:23.748580 systemd[1]: Started cri-containerd-ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd.scope - libcontainer container ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd. Mar 20 18:09:23.754161 systemd-logind[1442]: New session 11 of user core. Mar 20 18:09:23.754872 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 20 18:09:23.794447 containerd[1465]: time="2025-03-20T18:09:23.794404143Z" level=info msg="StartContainer for \"ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd\" returns successfully" Mar 20 18:09:23.814002 systemd-networkd[1399]: calie4f24563690: Gained IPv6LL Mar 20 18:09:23.969082 sshd[4539]: Connection closed by 10.0.0.1 port 35518 Mar 20 18:09:23.968933 sshd-session[4514]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:23.983344 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:35518.service: Deactivated successfully. Mar 20 18:09:23.985993 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 18:09:23.988151 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Mar 20 18:09:23.995327 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:35530.service - OpenSSH per-connection server daemon (10.0.0.1:35530). Mar 20 18:09:23.996250 systemd-logind[1442]: Removed session 11. 
Mar 20 18:09:24.055333 sshd[4562]: Accepted publickey for core from 10.0.0.1 port 35530 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:24.056552 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:24.060554 systemd-logind[1442]: New session 12 of user core. Mar 20 18:09:24.071461 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 18:09:24.110544 kubelet[2567]: I0320 18:09:24.109943 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69bc6fc597-kl4kp" podStartSLOduration=20.889905378 podStartE2EDuration="23.109923642s" podCreationTimestamp="2025-03-20 18:09:01 +0000 UTC" firstStartedPulling="2025-03-20 18:09:21.48386355 +0000 UTC m=+36.636821765" lastFinishedPulling="2025-03-20 18:09:23.703881854 +0000 UTC m=+38.856840029" observedRunningTime="2025-03-20 18:09:24.109121918 +0000 UTC m=+39.262080133" watchObservedRunningTime="2025-03-20 18:09:24.109923642 +0000 UTC m=+39.262881857" Mar 20 18:09:24.144543 containerd[1465]: time="2025-03-20T18:09:24.144501046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd\" id:\"bce09cb640dfc1e03b8ea660b7cca6f1c99c95bbb5879b3e9f0eefcb9d609b57\" pid:4578 exited_at:{seconds:1742494164 nanos:144136266}" Mar 20 18:09:24.260249 sshd[4565]: Connection closed by 10.0.0.1 port 35530 Mar 20 18:09:24.260772 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:24.264017 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:35530.service: Deactivated successfully. Mar 20 18:09:24.265897 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 18:09:24.266527 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Mar 20 18:09:24.267271 systemd-logind[1442]: Removed session 12. 
Mar 20 18:09:24.388454 systemd-networkd[1399]: cali8ceb6af84e1: Gained IPv6LL Mar 20 18:09:25.021177 containerd[1465]: time="2025-03-20T18:09:25.021127402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:25.022131 containerd[1465]: time="2025-03-20T18:09:25.022077574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 20 18:09:25.022778 containerd[1465]: time="2025-03-20T18:09:25.022751730Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:25.024786 containerd[1465]: time="2025-03-20T18:09:25.024745479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:25.025616 containerd[1465]: time="2025-03-20T18:09:25.025581524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.320648091s" Mar 20 18:09:25.025662 containerd[1465]: time="2025-03-20T18:09:25.025616686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 20 18:09:25.026762 containerd[1465]: time="2025-03-20T18:09:25.026556657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 20 18:09:25.028580 containerd[1465]: time="2025-03-20T18:09:25.028074739Z" level=info 
msg="CreateContainer within sandbox \"72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 20 18:09:25.034446 containerd[1465]: time="2025-03-20T18:09:25.034416084Z" level=info msg="Container e6cf95121d62171aa7e62d1aac0a866c638494d16f7b5a0a102be50bd50ff839: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:25.040807 containerd[1465]: time="2025-03-20T18:09:25.040742587Z" level=info msg="CreateContainer within sandbox \"72a67d0a7c571cc388a60e3b7a6fc4a40f72f7856d435286d908a2e028a33163\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e6cf95121d62171aa7e62d1aac0a866c638494d16f7b5a0a102be50bd50ff839\"" Mar 20 18:09:25.041310 containerd[1465]: time="2025-03-20T18:09:25.041270816Z" level=info msg="StartContainer for \"e6cf95121d62171aa7e62d1aac0a866c638494d16f7b5a0a102be50bd50ff839\"" Mar 20 18:09:25.042353 containerd[1465]: time="2025-03-20T18:09:25.042323593Z" level=info msg="connecting to shim e6cf95121d62171aa7e62d1aac0a866c638494d16f7b5a0a102be50bd50ff839" address="unix:///run/containerd/s/15751f7a79f59d3a34cd478899a52be9113f1a27c2bc42d07f18dc794fa67cbe" protocol=ttrpc version=3 Mar 20 18:09:25.064582 systemd[1]: Started cri-containerd-e6cf95121d62171aa7e62d1aac0a866c638494d16f7b5a0a102be50bd50ff839.scope - libcontainer container e6cf95121d62171aa7e62d1aac0a866c638494d16f7b5a0a102be50bd50ff839. 
Mar 20 18:09:25.097918 containerd[1465]: time="2025-03-20T18:09:25.097814205Z" level=info msg="StartContainer for \"e6cf95121d62171aa7e62d1aac0a866c638494d16f7b5a0a102be50bd50ff839\" returns successfully" Mar 20 18:09:25.118479 kubelet[2567]: I0320 18:09:25.118331 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76d74f9c86-ws9kv" podStartSLOduration=21.335624486 podStartE2EDuration="24.118316358s" podCreationTimestamp="2025-03-20 18:09:01 +0000 UTC" firstStartedPulling="2025-03-20 18:09:22.243732738 +0000 UTC m=+37.396690953" lastFinishedPulling="2025-03-20 18:09:25.02642461 +0000 UTC m=+40.179382825" observedRunningTime="2025-03-20 18:09:25.115695816 +0000 UTC m=+40.268654111" watchObservedRunningTime="2025-03-20 18:09:25.118316358 +0000 UTC m=+40.271274573" Mar 20 18:09:26.105375 containerd[1465]: time="2025-03-20T18:09:26.104744133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:26.105940 containerd[1465]: time="2025-03-20T18:09:26.105893234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 20 18:09:26.106128 containerd[1465]: time="2025-03-20T18:09:26.106108085Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:26.108271 containerd[1465]: time="2025-03-20T18:09:26.108236238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:26.109264 kubelet[2567]: I0320 18:09:26.109210 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 18:09:26.116446 
containerd[1465]: time="2025-03-20T18:09:26.116415392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.089831773s" Mar 20 18:09:26.116521 containerd[1465]: time="2025-03-20T18:09:26.116451313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 20 18:09:26.117670 containerd[1465]: time="2025-03-20T18:09:26.117462767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 20 18:09:26.118560 containerd[1465]: time="2025-03-20T18:09:26.118529664Z" level=info msg="CreateContainer within sandbox \"7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 20 18:09:26.124903 containerd[1465]: time="2025-03-20T18:09:26.124853719Z" level=info msg="Container 8ae182e8382e029ab07e8f5e77ba23041bbc8308973417cd4d9c50b87fc0e9a2: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:26.134400 containerd[1465]: time="2025-03-20T18:09:26.134350902Z" level=info msg="CreateContainer within sandbox \"7eb7b4fc6aced18d67c5b7bd2c511d0908293fe0953900370a4a245a082c511b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8ae182e8382e029ab07e8f5e77ba23041bbc8308973417cd4d9c50b87fc0e9a2\"" Mar 20 18:09:26.135347 containerd[1465]: time="2025-03-20T18:09:26.134855849Z" level=info msg="StartContainer for \"8ae182e8382e029ab07e8f5e77ba23041bbc8308973417cd4d9c50b87fc0e9a2\"" Mar 20 18:09:26.136556 containerd[1465]: time="2025-03-20T18:09:26.136526338Z" level=info 
msg="connecting to shim 8ae182e8382e029ab07e8f5e77ba23041bbc8308973417cd4d9c50b87fc0e9a2" address="unix:///run/containerd/s/936ff5f1213a1b6a243a0a1decb0108753107af46f7b21f9f7f4a0659919a708" protocol=ttrpc version=3 Mar 20 18:09:26.157477 systemd[1]: Started cri-containerd-8ae182e8382e029ab07e8f5e77ba23041bbc8308973417cd4d9c50b87fc0e9a2.scope - libcontainer container 8ae182e8382e029ab07e8f5e77ba23041bbc8308973417cd4d9c50b87fc0e9a2. Mar 20 18:09:26.194041 containerd[1465]: time="2025-03-20T18:09:26.193996104Z" level=info msg="StartContainer for \"8ae182e8382e029ab07e8f5e77ba23041bbc8308973417cd4d9c50b87fc0e9a2\" returns successfully" Mar 20 18:09:26.357101 containerd[1465]: time="2025-03-20T18:09:26.356991183Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 18:09:26.357555 containerd[1465]: time="2025-03-20T18:09:26.357508411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 20 18:09:26.359472 containerd[1465]: time="2025-03-20T18:09:26.359443193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 241.946785ms" Mar 20 18:09:26.359817 containerd[1465]: time="2025-03-20T18:09:26.359476235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 20 18:09:26.361332 containerd[1465]: time="2025-03-20T18:09:26.361306092Z" level=info msg="CreateContainer within sandbox \"d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 20 18:09:26.368522 containerd[1465]: time="2025-03-20T18:09:26.368471032Z" level=info msg="Container c3a00c9ef887de487afa1e606624f3e39a0a1488d26494e76e921b2be60e4c83: CDI devices from CRI Config.CDIDevices: []" Mar 20 18:09:26.374480 containerd[1465]: time="2025-03-20T18:09:26.374435268Z" level=info msg="CreateContainer within sandbox \"d9ce946281fada5edc588bdb804b54dfb501927b9d806dec6624840f890bab3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c3a00c9ef887de487afa1e606624f3e39a0a1488d26494e76e921b2be60e4c83\"" Mar 20 18:09:26.375166 containerd[1465]: time="2025-03-20T18:09:26.375107263Z" level=info msg="StartContainer for \"c3a00c9ef887de487afa1e606624f3e39a0a1488d26494e76e921b2be60e4c83\"" Mar 20 18:09:26.376322 containerd[1465]: time="2025-03-20T18:09:26.376245804Z" level=info msg="connecting to shim c3a00c9ef887de487afa1e606624f3e39a0a1488d26494e76e921b2be60e4c83" address="unix:///run/containerd/s/957cc37fd571038f0b2f3806bc7b5cdb35ea4738694a982dd0d4acea8feedf95" protocol=ttrpc version=3 Mar 20 18:09:26.395454 systemd[1]: Started cri-containerd-c3a00c9ef887de487afa1e606624f3e39a0a1488d26494e76e921b2be60e4c83.scope - libcontainer container c3a00c9ef887de487afa1e606624f3e39a0a1488d26494e76e921b2be60e4c83. 
Mar 20 18:09:26.430982 containerd[1465]: time="2025-03-20T18:09:26.430942943Z" level=info msg="StartContainer for \"c3a00c9ef887de487afa1e606624f3e39a0a1488d26494e76e921b2be60e4c83\" returns successfully" Mar 20 18:09:27.016384 kubelet[2567]: I0320 18:09:27.016343 2567 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 20 18:09:27.017609 kubelet[2567]: I0320 18:09:27.017588 2567 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 20 18:09:27.382751 kubelet[2567]: I0320 18:09:27.382670 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bsfqz" podStartSLOduration=21.66941447 podStartE2EDuration="26.382561646s" podCreationTimestamp="2025-03-20 18:09:01 +0000 UTC" firstStartedPulling="2025-03-20 18:09:21.404196505 +0000 UTC m=+36.557154720" lastFinishedPulling="2025-03-20 18:09:26.117343681 +0000 UTC m=+41.270301896" observedRunningTime="2025-03-20 18:09:27.382380317 +0000 UTC m=+42.535338532" watchObservedRunningTime="2025-03-20 18:09:27.382561646 +0000 UTC m=+42.535519861" Mar 20 18:09:28.118451 kubelet[2567]: I0320 18:09:28.118406 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 18:09:29.272479 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:35540.service - OpenSSH per-connection server daemon (10.0.0.1:35540). Mar 20 18:09:29.342357 sshd[4733]: Accepted publickey for core from 10.0.0.1 port 35540 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:29.347090 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:29.354663 systemd-logind[1442]: New session 13 of user core. Mar 20 18:09:29.364431 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 20 18:09:29.538366 sshd[4735]: Connection closed by 10.0.0.1 port 35540 Mar 20 18:09:29.538696 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:29.553464 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:35540.service: Deactivated successfully. Mar 20 18:09:29.554946 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 18:09:29.555685 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Mar 20 18:09:29.557466 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:35546.service - OpenSSH per-connection server daemon (10.0.0.1:35546). Mar 20 18:09:29.558356 systemd-logind[1442]: Removed session 13. Mar 20 18:09:29.610289 sshd[4749]: Accepted publickey for core from 10.0.0.1 port 35546 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:29.611579 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:29.616341 systemd-logind[1442]: New session 14 of user core. Mar 20 18:09:29.632431 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 20 18:09:29.831405 sshd[4752]: Connection closed by 10.0.0.1 port 35546 Mar 20 18:09:29.831867 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:29.840462 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:35546.service: Deactivated successfully. Mar 20 18:09:29.842769 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 18:09:29.844109 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Mar 20 18:09:29.845506 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:35552.service - OpenSSH per-connection server daemon (10.0.0.1:35552). Mar 20 18:09:29.846198 systemd-logind[1442]: Removed session 14. 
Mar 20 18:09:29.902408 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 35552 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:29.903779 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:29.907880 systemd-logind[1442]: New session 15 of user core. Mar 20 18:09:29.918492 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 18:09:30.245410 kubelet[2567]: I0320 18:09:30.244872 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 18:09:30.263642 kubelet[2567]: I0320 18:09:30.263131 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76d74f9c86-4ns2k" podStartSLOduration=26.193964147 podStartE2EDuration="29.263116182s" podCreationTimestamp="2025-03-20 18:09:01 +0000 UTC" firstStartedPulling="2025-03-20 18:09:23.290901511 +0000 UTC m=+38.443859726" lastFinishedPulling="2025-03-20 18:09:26.360053546 +0000 UTC m=+41.513011761" observedRunningTime="2025-03-20 18:09:27.395793372 +0000 UTC m=+42.548751627" watchObservedRunningTime="2025-03-20 18:09:30.263116182 +0000 UTC m=+45.416074397" Mar 20 18:09:31.358510 sshd[4765]: Connection closed by 10.0.0.1 port 35552 Mar 20 18:09:31.359037 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:31.372963 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:35552.service: Deactivated successfully. Mar 20 18:09:31.376687 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 18:09:31.379543 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Mar 20 18:09:31.380815 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:35568.service - OpenSSH per-connection server daemon (10.0.0.1:35568). Mar 20 18:09:31.382648 systemd-logind[1442]: Removed session 15. 
Mar 20 18:09:31.441829 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 35568 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:31.443578 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:31.485530 systemd-logind[1442]: New session 16 of user core. Mar 20 18:09:31.495428 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 18:09:31.829304 sshd[4790]: Connection closed by 10.0.0.1 port 35568 Mar 20 18:09:31.829101 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:31.843410 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:35568.service: Deactivated successfully. Mar 20 18:09:31.845533 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 18:09:31.846419 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Mar 20 18:09:31.850002 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:35574.service - OpenSSH per-connection server daemon (10.0.0.1:35574). Mar 20 18:09:31.851149 systemd-logind[1442]: Removed session 16. Mar 20 18:09:31.905742 sshd[4801]: Accepted publickey for core from 10.0.0.1 port 35574 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:31.907092 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:31.911973 systemd-logind[1442]: New session 17 of user core. Mar 20 18:09:31.923463 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 18:09:32.063259 sshd[4804]: Connection closed by 10.0.0.1 port 35574 Mar 20 18:09:32.063604 sshd-session[4801]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:32.066890 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:35574.service: Deactivated successfully. Mar 20 18:09:32.070037 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 18:09:32.070728 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. 
Mar 20 18:09:32.071550 systemd-logind[1442]: Removed session 17. Mar 20 18:09:36.687053 containerd[1465]: time="2025-03-20T18:09:36.687013499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c2b786f1ca1701de294a74093e2da935b96ddcd0bb2aa6dcbcab6396a028675\" id:\"785ec0ab735b3e6dd666cbc9b329cedd35660451682070f87e136d032a7e560a\" pid:4837 exited_at:{seconds:1742494176 nanos:686738847}" Mar 20 18:09:37.080204 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:36606.service - OpenSSH per-connection server daemon (10.0.0.1:36606). Mar 20 18:09:37.139078 sshd[4854]: Accepted publickey for core from 10.0.0.1 port 36606 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:37.140522 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:37.144875 systemd-logind[1442]: New session 18 of user core. Mar 20 18:09:37.156498 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 18:09:37.299790 sshd[4856]: Connection closed by 10.0.0.1 port 36606 Mar 20 18:09:37.300117 sshd-session[4854]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:37.303371 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Mar 20 18:09:37.303495 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:36606.service: Deactivated successfully. Mar 20 18:09:37.305481 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 18:09:37.306925 systemd-logind[1442]: Removed session 18. Mar 20 18:09:42.311844 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:36612.service - OpenSSH per-connection server daemon (10.0.0.1:36612). Mar 20 18:09:42.373060 sshd[4870]: Accepted publickey for core from 10.0.0.1 port 36612 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:42.375204 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:42.380503 systemd-logind[1442]: New session 19 of user core. 
Mar 20 18:09:42.388428 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 20 18:09:42.613219 sshd[4872]: Connection closed by 10.0.0.1 port 36612 Mar 20 18:09:42.613934 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:42.618175 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:36612.service: Deactivated successfully. Mar 20 18:09:42.620998 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 18:09:42.622433 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Mar 20 18:09:42.625648 systemd-logind[1442]: Removed session 19. Mar 20 18:09:42.669589 kubelet[2567]: I0320 18:09:42.669546 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 18:09:45.288737 containerd[1465]: time="2025-03-20T18:09:45.288695797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec2ec5b320aa101ae04c31654f01b66feb1b5012914854c560107b7dda6372cd\" id:\"0c2dbf920e997e0f0b443c8a3ed2c0b3c80f77d649b702de89caa59d6d997dc3\" pid:4904 exited_at:{seconds:1742494185 nanos:288499269}" Mar 20 18:09:47.630236 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:60498.service - OpenSSH per-connection server daemon (10.0.0.1:60498). Mar 20 18:09:47.673067 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:09:47.674602 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:09:47.678984 systemd-logind[1442]: New session 20 of user core. Mar 20 18:09:47.687478 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 18:09:47.853951 sshd[4917]: Connection closed by 10.0.0.1 port 60498 Mar 20 18:09:47.854293 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Mar 20 18:09:47.856931 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:60498.service: Deactivated successfully. 
Mar 20 18:09:47.858839 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 18:09:47.860299 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Mar 20 18:09:47.861084 systemd-logind[1442]: Removed session 20.